Sample records for log-normal size distribution

  1. Gradually truncated log-normal in USA publicly traded firm size distribution

    NASA Astrophysics Data System (ADS)

    Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.

    2007-03-01

    We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sales are used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of the log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. For USA firms, the log-normal distribution has to be gradually truncated after a certain critical value. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid, with some modification for very large firms. We also consider the possible mechanisms behind this distribution.
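
    As an illustration of the Zipf-plot technique mentioned above, the following minimal Python sketch ranks synthetic, log-normally distributed "sales" values and plots size against rank on log-log axes; all data and parameters are invented for demonstration and are not taken from the paper.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # Synthetic "sales" drawn from a log-normal distribution (illustrative parameters).
    sales = rng.lognormal(mean=4.0, sigma=1.5, size=5000)

    # Zipf plot: sort sizes in descending order and plot size against rank on log-log axes.
    ranks = np.arange(1, sales.size + 1)
    sorted_sales = np.sort(sales)[::-1]

    plt.loglog(ranks, sorted_sales, ".", markersize=2)
    plt.xlabel("rank")
    plt.ylabel("sales (firm size)")
    plt.title("Zipf plot of synthetic log-normal firm sizes")
    plt.show()
    ```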

  2. Log-Normal Distribution of Cosmic Voids in Simulations and Mocks

    NASA Astrophysics Data System (ADS)

    Russell, E.; Pycke, J.-R.

    2017-01-01

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.

  3. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.

  4. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    NASA Technical Reports Server (NTRS)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
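
    A minimal sketch of one way to compare candidate distributions (log-normal, exponential, power-law, normal) on a sample of areas, here via maximum-likelihood fits and AIC; the data are synthetic stand-ins and the paper's actual statistical tests may differ.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic stand-in for the 120 measured SAD cross-sectional areas (log-normal by construction).
    areas = rng.lognormal(mean=0.0, sigma=0.8, size=120)

    def aic(loglik, n_params):
        return 2 * n_params - 2 * loglik

    results = {}
    for name, dist in {"log-normal": stats.lognorm, "exponential": stats.expon,
                       "normal": stats.norm}.items():
        params = dist.fit(areas)                     # maximum-likelihood fit
        results[name] = aic(np.sum(dist.logpdf(areas, *params)), len(params))

    # Continuous power law with xmin fixed at the sample minimum (Clauset-style MLE).
    xmin = areas.min()
    alpha = 1.0 + areas.size / np.sum(np.log(areas / xmin))
    loglik_pl = (areas.size * np.log(alpha - 1.0) - areas.size * np.log(xmin)
                 - alpha * np.sum(np.log(areas / xmin)))
    results["power-law"] = aic(loglik_pl, 1)

    for name, value in sorted(results.items(), key=lambda kv: kv[1]):
        print(f"{name:12s} AIC = {value:.1f}")
    ```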

  5. Distribution of transvascular pathway sizes through the pulmonary microvascular barrier.

    PubMed

    McNamee, J E

    1987-01-01

    Mathematical models of solute and water exchange in the lung have been helpful in understanding factors governing the volume flow rate and composition of pulmonary lymph. As experimental data and models become more encompassing, parameter identification becomes more difficult. Pore sizes in these models should approach and eventually become equivalent to actual physiological pathway sizes as more complex and accurate models are tried. However, pore sizes and numbers vary from model to model as new pathway sizes are added. This apparent inconsistency of pore sizes can be explained if it is assumed that the pulmonary blood-lymph barrier is widely heteroporous, for example, being composed of a continuous distribution of pathway sizes. The sieving characteristics of the pulmonary barrier are reproduced by a log normal distribution of pathway sizes (log mean = -0.20, log s.d. = 1.05). A log normal distribution of pathways in the microvascular barrier is shown to follow from a rather general assumption about the nature of the pulmonary endothelial junction.
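
    Using the reported parameters (log mean = -0.20, log s.d. = 1.05), a short sketch of how the pathway-size distribution can be evaluated with scipy; base-10 logarithms and the choice of summary quantities are assumptions made here for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Pathway-size distribution with the reported parameters (log mean = -0.20,
    # log s.d. = 1.05); base-10 logs and unspecified units are assumed here.
    mu10, sd10 = -0.20, 1.05
    ln10 = np.log(10.0)

    # Equivalent scipy.stats.lognorm parameterisation (natural-log shape and scale).
    dist = stats.lognorm(s=sd10 * ln10, scale=10.0 ** mu10)

    print("median pathway size:", dist.median())
    print("mean pathway size:  ", dist.mean())
    print("5th-95th percentile:", dist.ppf([0.05, 0.95]))
    ```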

  6. [Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].

    PubMed

    Zhu, Chun; Zhang, Xu

    2010-10-01

    Vehicle emissions are one of the main sources of fine/ultra-fine particles in many cities. This study first presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, obtained from four days of consecutive real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by multiple linear regression (MLR); the particle distributions of diesel buses and CNG buses are observed to be single accumulation-mode and nuclei-mode distributions, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30-min interval mean scan; the degrees of fit between the combined fitting curves and the corresponding in-situ scans, for a total of 90 fitted scans, range from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation-mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.30.
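
    A hedged sketch of the two-mode log-normal decomposition described above, fitting a bimodal dN/dlog10(dp) curve with scipy's curve_fit; the initial guesses and the synthetic demonstration scan are illustrative, not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lognormal_mode(dp, n_total, dp_g, sigma_g):
        """One log-normal mode of a number size distribution, dN/dlog10(dp)."""
        return (n_total / (np.sqrt(2.0 * np.pi) * np.log10(sigma_g))
                * np.exp(-(np.log10(dp / dp_g)) ** 2 / (2.0 * np.log10(sigma_g) ** 2)))

    def bimodal(dp, n1, dpg1, sg1, n2, dpg2, sg2):
        return lognormal_mode(dp, n1, dpg1, sg1) + lognormal_mode(dp, n2, dpg2, sg2)

    def fit_scan(dp, dndlogdp):
        """Decompose one 30-min mean scan into nuclei + accumulation log-normal modes."""
        p0 = [1e4, 20.0, 1.3, 1e4, 80.0, 1.9]          # rough initial guesses (nm)
        bounds = ([0.0, 5.0, 1.05, 0.0, 5.0, 1.05],
                  [np.inf, 1000.0, 3.0, np.inf, 1000.0, 3.0])
        popt, _ = curve_fit(bimodal, dp, dndlogdp, p0=p0, bounds=bounds)
        return popt

    # Synthetic demonstration scan (diameters in nm).
    dp = np.logspace(1, 3, 60)
    scan = bimodal(dp, 8e3, 21.0, 1.28, 5e3, 80.0, 1.95)
    print(fit_scan(dp, scan).round(2))
    ```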

  7. Fluctuations in email size

    NASA Astrophysics Data System (ADS)

    Matsubara, Yoshitsugu; Musashi, Yasuo

    2017-12-01

    The purpose of this study is to explain fluctuations in email size. We have previously investigated the long-term correlations between email send requests and data flow in the system log of the primary staff email server at a university campus, finding that email size frequency follows a power-law distribution with two inflection points, and that the power-law property weakens the correlation of the data flow. However, the mechanism underlying this fluctuation is not completely understood. We collected new log data from both staff and students over six academic years and analyzed the frequency distribution thereof, focusing on the type of content contained in the emails. Furthermore, we obtained permission to collect "Content-Type" log data from the email headers. We therefore collected the staff log data from May 1, 2015 to July 31, 2015, creating two subdistributions. In this paper, we propose a model to explain these subdistributions, which follow log-normal-like distributions. In the log-normal-like model, email senders, consciously or unconsciously, regulate the size of new email sentences according to a normal distribution. The fitting of the model is acceptable for these subdistributions, and the model demonstrates power-law properties for large email sizes. An analysis of the length of new email sentences would be required for further discussion of our model; however, to protect user privacy at the participating organization, we left this analysis for future work. This study provides new knowledge on the properties of email sizes, and our model is expected to contribute to the decision on whether to establish upper size limits in the design of email services.

  8. Size distribution of submarine landslides along the U.S. Atlantic margin

    USGS Publications Warehouse

    Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.

    2009-01-01

    Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km² and 2410 km² and volumes between 0.002 km³ and 179 km³. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km³ may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km³), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.

  9. Box-Cox transformation of firm size data in statistical analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analyses such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. These data deviate considerably from a normal distribution. To improve their normality we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero, in which case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
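
    A small sketch of the kurtosis-based selection of the Box-Cox parameter described in the abstract, applied to synthetic right-skewed "employee" counts; the grid search and all parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Synthetic firm-size data (e.g., number of employees), heavily right-skewed.
    employees = rng.lognormal(mean=3.0, sigma=1.2, size=2000)

    # Choose the Box-Cox lambda that makes the excess kurtosis closest to zero
    # (the normal-kurtosis criterion described in the abstract).
    lambdas = np.linspace(-0.5, 1.0, 301)
    kurt = [abs(stats.kurtosis(stats.boxcox(employees, lmbda=lam))) for lam in lambdas]
    best_lambda = lambdas[int(np.argmin(kurt))]

    print("lambda minimising |excess kurtosis|:", round(best_lambda, 3))
    # A lambda near zero means the transformation is close to a plain log-transform,
    # i.e. the data are approximately log-normal.
    ```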

  10. Inferring local competition intensity from patch size distributions: a test using biological soil crusts

    USGS Publications Warehouse

    Bowker, Matthew A.; Maestre, Fernando T.

    2012-01-01

    Dryland vegetation is inherently patchy. This patchiness goes on to impact ecology, hydrology, and biogeochemistry. Recently, researchers have proposed that dryland vegetation patch sizes follow a power law which is due to local plant facilitation. It is unknown what patch size distribution prevails when competition predominates over facilitation, or if such a pattern could be used to detect competition. We investigated this question in an alternative vegetation type, mosses and lichens of biological soil crusts, which exhibit a smaller scale patch-interpatch configuration. This micro-vegetation is characterized by competition for space. We proposed that multiplicative effects of genetics, environment and competition should result in a log-normal patch size distribution. When testing the prevalence of log-normal versus power law patch size distributions, we found that the log-normal was the better distribution in 53% of cases and a reasonable fit in 83%. In contrast, the power law was better in 39% of cases, and in 8% of instances both distributions fit equally well. We further hypothesized that the log-normal distribution parameters would be predictably influenced by competition strength. There was qualitative agreement between one of the distribution's parameters (μ) and a novel intransitive (lacking a 'best' competitor) competition index, suggesting that as intransitivity increases, patch sizes decrease. The correlation of μ with other competition indicators based on spatial segregation of species (the C-score) depended on aridity. In less arid sites, μ was negatively correlated with the C-score (suggesting smaller patches under stronger competition), while positive correlations (suggesting larger patches under stronger competition) were observed at more arid sites. We propose that this is due to an increasing prevalence of competition transitivity as aridity increases. These findings broaden the emerging theory surrounding dryland patch size distributions and, with refinement, may help us infer cryptic ecological processes from easily observed spatial patterns in the field.

  11. An estimate of field size distributions for selected sites in the major grain producing countries

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1977-01-01

    The field size distributions for the major grain producing countries of the World were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for their frequency distributions. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching that of either a Poisson or log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution which has moments readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.

  12. Optical and physical properties of stratospheric aerosols from balloon measurements in the visible and near-infrared domains. I. Analysis of aerosol extinction spectra from the AMON and SALOMON balloonborne spectrometers

    NASA Astrophysics Data System (ADS)

    Berthet, Gwenaël; Renard, Jean-Baptiste; Brogniez, Colette; Robert, Claude; Chartier, Michel; Pirre, Michel

    2002-12-01

    Aerosol extinction coefficients have been derived in the 375-700-nm spectral domain from measurements in the stratosphere since 1992, at night, at mid- and high latitudes from 15 to 40 km, by two balloonborne spectrometers, Absorption par les Minoritaires Ozone et NOx (AMON) and Spectroscopie d'Absorption Lunaire pour l'Observation des Minoritaires Ozone et NOx (SALOMON). Log-normal size distributions associated with the Mie-computed extinction spectra that best fit the measurements permit calculation of integrated properties of the distributions. Although measured extinction spectra that correspond to background aerosols can be reproduced by the Mie scattering model by use of monomodal log-normal size distributions, each flight reveals some large discrepancies between measurement and theory at several altitudes. The agreement between measured and Mie-calculated extinction spectra is significantly improved by use of bimodal log-normal distributions. Nevertheless, neither monomodal nor bimodal distributions permit correct reproduction of some of the measured extinction shapes, especially for the 26 February 1997 AMON flight, which exhibited spectral behavior attributed to particles from a polar stratospheric cloud event.

  13. Growth models and the expected distribution of fluctuating asymmetry

    USGS Publications Warehouse

    Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John

    2003-01-01

    Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
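
    The following sketch illustrates the multiplicative-error case discussed above: with correlated log-normal left/right measurements, l - r is leptokurtic while log(l) - log(r) is approximately normal. All parameter values are invented for demonstration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Synthetic left/right trait sizes with correlated multiplicative (log-normal) error.
    n = 5000
    true_size = rng.lognormal(mean=2.0, sigma=0.3, size=n)
    left = true_size * rng.lognormal(mean=0.0, sigma=0.05, size=n)
    right = true_size * rng.lognormal(mean=0.0, sigma=0.05, size=n)

    raw_fa = left - right                  # size-scaled and leptokurtic
    log_fa = np.log(left) - np.log(right)  # approximately normal under multiplicative error

    print("excess kurtosis of l - r:          ", round(stats.kurtosis(raw_fa), 2))
    print("excess kurtosis of log(l) - log(r):", round(stats.kurtosis(log_fa), 2))
    ```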

  14. Bidisperse and polydisperse suspension rheology at large solid fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.

    At the same solid volume fraction, bidisperse and polydisperse suspensions display lower viscosities, and weaker normal stress response, compared to monodisperse suspensions. The reduction of viscosity associated with size distribution can be explained by an increase of the maximum flowable, or jamming, solid fraction. In this work, concentrated or "dense" suspensions are simulated under strong shearing, where thermal motion and repulsive forces are negligible, but we allow for particle contact with a mild frictional interaction with interparticle friction coefficient of 0.2. Aspects of bidisperse suspension rheology are first revisited to establish that the approach reproduces established trends; the study ofmore » bidisperse suspensions at size ratios of large to small particle radii (2 to 4) shows that a minimum in the viscosity occurs for zeta slightly above 0.5, where zeta=phi_{large}/phi is the fraction of the total solid volume occupied by the large particles. The simple shear flows of polydisperse suspensions with truncated normal and log normal size distributions, and bidisperse suspensions which are statistically equivalent with these polydisperse cases up to third moment of the size distribution, are simulated and the rheologies are extracted. Prior work shows that such distributions with equivalent low-order moments have similar phi_{m}, and the rheological behaviors of normal, log normal and bidisperse cases are shown to be in close agreement for a wide range of standard deviation in particle size, with standard correlations which are functionally dependent on phi/phi_{m} providing excellent agreement with the rheology found in simulation. The close agreement of both viscosity and normal stress response between bi- and polydisperse suspensions demonstrates the controlling in influence of the maximum packing fraction in noncolloidal suspensions. Microstructural investigations and the stress distribution according to particle size are also presented.« less

  15. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
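
    Since the abstract does not give the model equations, the sketch below is only a generic simulation scaffold for a Markov growth process with size-dependent additive noise; the drift and the noise function are assumptions, not the authors' exact model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Generic scaffold: x_{t+1} = x_t + g + sigma(x_t) * xi_t, with noise amplitude
    # depending on cluster size. The drift g and sigma(x) are illustrative assumptions.
    def simulate(n_clusters=100_000, steps=300, x0=1.0, g=0.01,
                 sigma=lambda x: 0.05 * np.sqrt(x)):
        x = np.full(n_clusters, x0)
        for _ in range(steps):
            x = np.maximum(x + g + sigma(x) * rng.standard_normal(n_clusters), 1e-12)
        return x

    sizes = simulate()
    log_sizes = np.log(sizes)
    skew_log = float(((log_sizes - log_sizes.mean()) ** 3).mean() / log_sizes.std() ** 3)
    print("mean cluster size:", round(sizes.mean(), 3), " skewness of log-size:", round(skew_log, 3))
    ```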

  16. Ejected Particle Size Distributions from Shocked Metal Surfaces

    DOE PAGES

    Schauer, M. M.; Buttler, W. T.; Frayer, D. K.; ...

    2017-04-12

    Here, we present size distributions for particles ejected from features machined onto the surface of shocked Sn targets. The functional form of the size distributions is assumed to be log-normal, and the characteristic parameters of the distribution are extracted from the measured angular distribution of light scattered from a laser beam incident on the ejected particles. We also found strong evidence for a bimodal distribution of particle sizes with smaller particles evolved from features machined into the target surface and larger particles being produced at the edges of these features.

  17. Ejected Particle Size Distributions from Shocked Metal Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schauer, M. M.; Buttler, W. T.; Frayer, D. K.

    Here, we present size distributions for particles ejected from features machined onto the surface of shocked Sn targets. The functional form of the size distributions is assumed to be log-normal, and the characteristic parameters of the distribution are extracted from the measured angular distribution of light scattered from a laser beam incident on the ejected particles. We also found strong evidence for a bimodal distribution of particle sizes with smaller particles evolved from features machined into the target surface and larger particles being produced at the edges of these features.

  18. Analysis of field size distributions, LACIE test sites 5029, 5033, and 5039, Anhwei Province, People's Republic of China

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1976-01-01

    A study was made of the field size distributions for LACIE test sites 5029, 5033, and 5039, People's Republic of China. Field lengths and widths were measured from LANDSAT imagery, and field area was statistically modeled. Field size parameters have log-normal or Poisson frequency distributions. These were normalized to the Gaussian distribution and theoretical population curves were made. When compared to fields in other areas of the same country measured in the previous study, field lengths and widths in the three LACIE test sites were 2 to 3 times smaller and areas were smaller by an order of magnitude.

  19. Empirical study of the tails of mutual fund size

    NASA Astrophysics Data System (ADS)

    Schwarzkopf, Yonathan; Farmer, J. Doyne

    2010-06-01

    The mutual fund industry manages about a quarter of the assets in the U.S. stock market and thus plays an important role in the U.S. economy. The question of how much control is concentrated in the hands of the largest players is best quantitatively discussed in terms of the tail behavior of the mutual fund size distribution. We study the distribution empirically and show that the tail is much better described by a log-normal than a power law, indicating less concentration than, for example, personal income. The results are highly statistically significant and are consistent across fifteen years. This contradicts a recent theory concerning the origin of the power law tails of the trading volume distribution. Based on the analysis in a companion paper, the log-normality is to be expected, and indicates that the distribution of mutual funds remains perpetually out of equilibrium.

  20. ELISPOTs Produced by CD8 and CD4 Cells Follow Log Normal Size Distribution Permitting Objective Counting

    PubMed Central

    Karulin, Alexey Y.; Karacsony, Kinga; Zhang, Wenji; Targoni, Oleg S.; Moldovan, Ioana; Dittrich, Marcus; Sundararaman, Srividya; Lehmann, Paul V.

    2015-01-01

    Each positive well in ELISPOT assays contains spots of variable sizes that can range from tens of micrometers up to a millimeter in diameter. Therefore, when it comes to counting these spots the decision on setting the lower and the upper spot size thresholds to discriminate between non-specific background noise, spots produced by individual T cells, and spots formed by T cell clusters is critical. If the spot sizes follow a known statistical distribution, precise predictions on minimal and maximal spot sizes, belonging to a given T cell population, can be made. We studied the size distributional properties of IFN-γ, IL-2, IL-4, IL-5 and IL-17 spots elicited in ELISPOT assays with PBMC from 172 healthy donors, upon stimulation with 32 individual viral peptides representing defined HLA Class I-restricted epitopes for CD8 cells, and with protein antigens of CMV and EBV activating CD4 cells. A total of 334 CD8 and 80 CD4 positive T cell responses were analyzed. In 99.7% of the test cases, spot size distributions followed a log normal function. These data formally demonstrate that it is possible to establish objective, statistically validated parameters for counting T cell ELISPOTs. PMID:25612115
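
    A minimal sketch of how objective counting gates could be derived from a fitted log-normal spot-size distribution; the percentile-based gates and the synthetic spot sizes are illustrative choices, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Synthetic spot diameters (micrometres) standing in for one positive ELISPOT well.
    spot_sizes = rng.lognormal(mean=np.log(45.0), sigma=0.45, size=800)

    # Fit a log-normal and derive counting gates from its percentiles.
    shape, loc, scale = stats.lognorm.fit(spot_sizes, floc=0)
    lower_gate, upper_gate = stats.lognorm.ppf([0.005, 0.995], shape, loc, scale)

    counted = np.count_nonzero((spot_sizes >= lower_gate) & (spot_sizes <= upper_gate))
    print(f"gates: {lower_gate:.1f}-{upper_gate:.1f} um; counted {counted} of {len(spot_sizes)} spots")
    ```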

  1. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to retrieve the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main constituents of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then retrieved by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.

  2. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    NASA Astrophysics Data System (ADS)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as the log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.

  3. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log normal distribution appeared reasonable because nearly all visual psychological data is plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values; an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.

  4. Observed, unknown distributions of clinical chemical quantities should be considered to be log-normal: a proposal.

    PubMed

    Haeckel, Rainer; Wosniok, Werner

    2010-10-01

    The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
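
    A short numerical illustration of the point about CV(e): a Gaussian and a log-normal sharing the same mean and coefficient of variation are nearly indistinguishable at CV around 1% but clearly different at CV around 50%. The sample sizes and the KS metric are arbitrary choices made here for demonstration.

    ```python
    import numpy as np
    from scipy import stats

    def compare(cv, mean=1.0, n=200_000, seed=0):
        """KS distance between a Gaussian and a log-normal with the same mean and CV."""
        rng = np.random.default_rng(seed)
        sigma_ln = np.sqrt(np.log(1.0 + cv ** 2))     # log-normal shape from the CV
        mu_ln = np.log(mean) - sigma_ln ** 2 / 2.0
        lognormal_sample = rng.lognormal(mu_ln, sigma_ln, n)
        gaussian_sample = rng.normal(mean, cv * mean, n)
        return stats.ks_2samp(lognormal_sample, gaussian_sample).statistic

    print("KS distance, CV ~ 1% (e.g. plasma sodium):", round(compare(0.01), 4))
    print("KS distance, CV ~ 50% (e.g. plasma AST):  ", round(compare(0.50), 4))
    ```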

  5. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.

  6. Money-center structures in dynamic banking systems

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; Zhang, Minghui

    2016-10-01

    In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.

  7. The missing impact craters on Venus

    NASA Technical Reports Server (NTRS)

    Speidel, D. H.

    1993-01-01

    The size-frequency pattern of the 842 impact craters on Venus measured to date can be well described (across four standard deviation units) as a single log normal distribution with a mean crater diameter of 14.5 km. This result was predicted in 1991 on examination of the initial Magellan analysis. If this observed distribution is close to the real distribution, the 'missing' 90 percent of the small craters and the 'anomalous' lack of surface splotches may thus be neither missing nor anomalous. I think that the missing craters and missing splotches can be satisfactorily explained by accepting that the observed distribution approximates the real one, and that it is not craters that are missing but the impactors. What you see is what you got. The implication that Venus-crossing impactors would have the same type of log normal distribution is consistent with recently described distributions for terrestrial craters and Earth-crossing asteroids.

  8. Role of Demographic Dynamics and Conflict in the Population-Area Relationship for Human Languages

    PubMed Central

    Manrubia, Susanna C.; Axelsen, Jacob B.; Zanette, Damián H.

    2012-01-01

    Many patterns displayed by the distribution of human linguistic groups are similar to the ecological organization described for biological species. It remains a challenge to identify simple and meaningful processes that describe these patterns. The population size distribution of human linguistic groups, for example, is well fitted by a log-normal distribution that may arise from stochastic demographic processes. As we show in this contribution, the distribution of the area size of home ranges of those groups also agrees with a log-normal function. Further, size and area are significantly correlated: the number of speakers and the area spanned by linguistic groups follow an allometric (power-law) relation, with an exponent varying across different world regions. The empirical evidence presented leads to the hypothesis that the distributions of population size and area, and their mutual dependence, rely on demographic dynamics and on the result of conflicts over territory due to group growth. To substantiate this point, we introduce a two-variable stochastic multiplicative model whose analytical solution recovers the empirical observations. Applied to different world regions, the model reveals that the retreat in home range is sublinear with respect to the decrease in population size, and that the population-area exponent grows with the typical strength of conflicts. While the shape of the population size and area distributions, and their allometric relation, seem unavoidable outcomes of demography and inter-group contact, the precise value of the exponent could give insight on the cultural organization of those human groups in the last thousand years. PMID:22815726

  9. Methane Leaks from Natural Gas Systems Follow Extreme Distributions.

    PubMed

    Brandt, Adam R; Heath, Garvin A; Cooley, Daniel

    2016-11-15

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ∼15 000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
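
    The "largest 5% of leaks carry over 50% of the volume" statistic can be reproduced qualitatively with a short sketch; the two synthetic samples below (a log-normal and a Pareto-type heavy tail) are illustrative, not the study's measurements.

    ```python
    import numpy as np

    def top_share(sample, top_fraction=0.05):
        """Fraction of the total carried by the largest `top_fraction` of observations."""
        s = np.sort(sample)[::-1]
        k = max(1, int(round(top_fraction * s.size)))
        return s[:k].sum() / s.sum()

    rng = np.random.default_rng(6)
    # Illustrative leak-size samples: a moderately skewed log-normal versus a heavier tail.
    lognormal_leaks = rng.lognormal(mean=0.0, sigma=1.5, size=15_000)
    heavy_tailed_leaks = rng.pareto(a=1.3, size=15_000) + 1.0   # Pareto-type tail

    print("top-5% share, log-normal sample:  ", round(top_share(lognormal_leaks), 2))
    print("top-5% share, heavy-tailed sample:", round(top_share(heavy_tailed_leaks), 2))
    ```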

  10. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
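
    A quick simulation of the effect described above, using log-normal summands as one example of the class of positive random variables; the summand distribution and sample sizes are assumptions for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Sum of N positive, right-skewed summands (log-normal summands used as an example).
    N, trials = 50, 100_000
    sums = rng.lognormal(mean=0.0, sigma=1.0, size=(trials, N)).sum(axis=1)

    # Compare the fit of a normal and a log-normal to the distribution of the sum.
    norm_params = stats.norm.fit(sums)
    lognorm_params = stats.lognorm.fit(sums, floc=0)
    print("KS vs normal:    ", stats.kstest(sums, "norm", args=norm_params).statistic)
    print("KS vs log-normal:", stats.kstest(sums, "lognorm", args=lognorm_params).statistic)
    ```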

  11. A log-normal distribution model for the molecular weight of aquatic fulvic acids

    USGS Publications Warehouse

    Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.

    2000-01-01

    The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
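
    Assuming the log-normal model is over molecule number, the normal-curve mean and standard deviation of log10(M) follow from the standard moment relations for Mn and Mw; the sketch below uses illustrative Mn/Mw values, not the paper's data.

    ```python
    import numpy as np

    def log10_normal_params(mn, mw):
        """
        Mean and standard deviation of log10(M) for a number-based log-normal
        molecular weight distribution, given number- and weight-average MW.
        Uses Mw/Mn = exp(sigma_ln^2) and Mn = exp(mu_ln + sigma_ln^2 / 2).
        """
        sigma_ln = np.sqrt(np.log(mw / mn))
        mu_ln = np.log(mn) - sigma_ln ** 2 / 2.0
        return mu_ln / np.log(10.0), sigma_ln / np.log(10.0)

    # Illustrative values typical of aquatic fulvic acids (not taken from the paper).
    mean10, sd10 = log10_normal_params(mn=800.0, mw=1400.0)
    print(f"mean of log10(M) = {mean10:.2f}, s.d. of log10(M) = {sd10:.2f}")
    ```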

  12. The Italian primary school-size distribution and the city-size: a complex nexus

    NASA Astrophysics Data System (ADS)

    Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.

    2014-06-01

    We characterize the statistical law according to which the size of Italian primary schools is distributed. We find that the school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.

  13. Parametric modelling of cost data in medical studies.

    PubMed

    Nixon, R M; Thompson, S G

    2004-04-30

    The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
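
    A hedged sketch of the kind of parametric comparison described: maximum-likelihood fits of normal, gamma and log-normal models to skewed cost data, with a plain non-parametric bootstrap for the mean (the paper uses a parametric BC(a) bootstrap and MCMC checks). The data below are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    # Synthetic right-skewed cost data (e.g., per-patient cost), illustrative only.
    costs = rng.gamma(shape=1.5, scale=2000.0, size=300)

    fits = {"normal": stats.norm, "gamma": stats.gamma, "log-normal": stats.lognorm}
    for name, dist in fits.items():
        params = dist.fit(costs)               # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(costs, *params))
        mean_hat = dist.mean(*params)          # implied population mean cost
        print(f"{name:10s} log-likelihood = {loglik:9.1f}   implied mean = {mean_hat:9.1f}")

    # Non-parametric bootstrap CI for the sample mean, for comparison.
    boot_means = [rng.choice(costs, costs.size, replace=True).mean() for _ in range(2000)]
    print("bootstrap 95% CI for mean:", np.percentile(boot_means, [2.5, 97.5]).round(1))
    ```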

  14. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson – log normal (P – LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P – LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316

  15. Potential source identification for aerosol concentrations over a site in Northwestern India

    NASA Astrophysics Data System (ADS)

    Payra, Swagata; Kumar, Pramod; Verma, Sunita; Prakash, Divya; Soni, Manish

    2016-03-01

    The collocated measurements of aerosol size distribution (ASD) and aerosol optical thickness (AOT) are analyzed simultaneously using a Grimm aerosol spectrometer and a MICROTOP II Sunphotometer over Jaipur, the capital of Rajasthan in India. The contrasting temperature characteristics of the winter and summer seasons of 2011 are investigated in the present study. The total aerosol number concentration (TANC, 0.3-20 μm) during the winter season was higher than in summer and was dominated by the fine aerosol number concentration (FANC, < 2 μm). Particles smaller than 0.8 μm (aerodynamic size) constitute ~ 99% of all particles in winter and ~ 90% in summer. However, particles greater than 2 μm contribute ~ 3% and ~ 0.2% in the summer and winter seasons, respectively. The aerosol optical thickness shows nearly similar values during summer and winter, but correspondingly lower Angstrom Exponent (AE) values during summer than during winter. In this work, Potential Source Contribution Function (PSCF) analysis is applied to identify the locations of sources that influenced aerosol concentrations over the study area in the two seasons. PSCF analysis shows that dust particles from the Thar Desert contribute significantly to the coarse aerosol number concentration (CANC). Higher PSCF values to the north of Jaipur indicate the industrial areas of northern India as the likely sources of fine particles. The variation in the aerosol size distribution between the two seasons is clearly reflected in the log-normal size distribution curves, which reveal that particles smaller than 0.8 μm are the key contributor to the higher ANC in winter.

  16. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.

  17. A New Bond Albedo for Performing Orbital Debris Brightness to Size Transformations

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Matney, Mark J.

    2008-01-01

    We have developed a technique for estimating the intrinsic size distribution of orbital debris objects via optical measurements alone. The process is predicated on the empirically observed power-law size distribution of debris (as indicated by radar RCS measurements) and the log-normal probability distribution of optical albedos as ascertained from phase (Lambertian) and range-corrected telescopic brightness measurements. Since the observed distribution of optical brightness is the product integral of the size distribution of the parent [debris] population with the albedo probability distribution, it is a straightforward matter to transform a given distribution of optical brightness back to a size distribution by the appropriate choice of a single albedo value. This is true because the integration of a power-law with a log-normal distribution (Fredholm Integral of the First Kind) yields a Gaussian-blurred power-law distribution with identical power-law exponent. Application of a single albedo to this distribution recovers a simple power-law [in size] which is linearly offset from the original distribution by a constant whose value depends on the choice of the albedo. Significantly, there exists a unique Bond albedo which, when applied to an observed brightness distribution, yields zero offset and therefore recovers the original size distribution. For physically realistic power-laws of negative slope, the proper choice of albedo recovers the parent size distribution by compensating for the observational bias caused by the large number of small objects that appear anomalously large (bright) - and thereby skew the small population upward by rising above the detection threshold - and the lower number of large objects that appear anomalously small (dim). Based on this comprehensive analysis, a value of 0.13 should be applied to all orbital debris albedo-based brightness-to-size transformations regardless of data source. Its prima facie genesis, derived and constructed from the current RCS to size conversion methodology (SiBAM Size-Based Estimation Model) and optical data reduction standards, assures consistency in application with the prior canonical value of 0.1. Herein we present the empirical and mathematical arguments for this approach and by example apply it to a comprehensive set of photometric data acquired via NASA's Liquid Mirror Telescopes during the 2000-2001 observing season.

  18. Measuring firm size distribution with semi-nonparametric densities

    NASA Astrophysics Data System (ADS)

    Cortés, Lina M.; Mora-Valencia, Andrés; Perote, Javier

    2017-11-01

    In this article, we propose a new methodology based on a (log) semi-nonparametric (log-SNP) distribution that nests the lognormal and enables better fits in the upper tail of the distribution through the introduction of new parameters. We test the performance of the lognormal and log-SNP distributions capturing firm size, measured through a sample of US firms in 2004-2015. Taking different levels of aggregation by type of economic activity, our study shows that the log-SNP provides a better fit of the firm size distribution. We also formally introduce the multivariate log-SNP distribution, which encompasses the multivariate lognormal, to analyze the estimation of the joint distribution of the value of the firm's assets and sales. The results suggest that sales are a better firm size measure, as indicated by other studies in the literature.

  19. On the scaling of the distribution of daily price fluctuations in the Mexican financial market index

    NASA Astrophysics Data System (ADS)

    Alfonso, Léster; Mansilla, Ricardo; Terrero-Escalante, César A.

    2012-05-01

    In this paper, a statistical analysis of log-return fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of daily data covering the period from 04/09/2000-04/09/2010 was analyzed and fitted to different distributions. Tests of the goodness of fit were performed in order to quantitatively assess the quality of the estimation. Special attention was paid to the impact of the size of the sample on the estimated decay of the distribution's tail. In this study a forceful rejection of normality was obtained. On the other hand, the null hypothesis that the log-fluctuations are fitted to an α-stable Lévy distribution cannot be rejected at the 5% significance level.

  20. Log-normal spray drop distribution...analyzed by two new computer programs

    Treesearch

    Gerald S. Walton

    1968-01-01

    Results of U.S. Forest Service research on chemical insecticides suggest that large drops are not as effective as small drops in carrying insecticides to target insects. Two new computer programs have been written to analyze size distribution properties of drops from spray nozzles. Coded in Fortran IV, the programs have been tested on both the CDC 6400 and the IBM 7094...

  1. Universal Distribution of Litter Decay Rates

    NASA Astrophysics Data System (ADS)

    Forney, D. C.; Rothman, D. H.

    2008-12-01

    Degradation of litter is the result of many physical, chemical and biological processes. The high variability of these processes likely accounts for the progressive slowdown of decay with litter age. This age dependence is commonly thought to result from the superposition of processes with different decay rates k. Here we assume an underlying continuous yet unknown distribution p(k) of decay rates [1]. To seek its form, we analyze the mass-time history of 70 LIDET [2] litter data sets obtained under widely varying conditions. We construct a regularized inversion procedure to find the best fitting distribution p(k) with the least degrees of freedom. We find that the resulting p(k) is universally consistent with a lognormal distribution, i.e., a Gaussian distribution of log k, characterized by a dataset-dependent mean and variance of log k. This result is supported by a recurring observation that microbial populations on leaves are log-normally distributed [3]. Simple biological processes cause the frequent appearance of the log-normal distribution in ecology [4]. Environmental factors, such as soil nitrate, soil aggregate size, soil hydraulic conductivity, total soil nitrogen, soil denitrification, and soil respiration, have all been observed to be log-normally distributed [5]. Litter degradation rates depend on many coupled, multiplicative factors, which provides a fundamental basis for the lognormal distribution. Using this insight, we systematically estimated the mean and variance of log k for 512 data sets from the LIDET study. We find the mean strongly correlates with temperature and precipitation, while the variance appears to be uncorrelated with main environmental factors and is thus likely more correlated with chemical composition and/or ecology. Results indicate the possibility that the distribution in rates reflects, at least in part, the distribution of microbial niches. [1] B. P. Boudreau, B. R. Ruddick, American Journal of Science, 291, 507 (1991). [2] M. Harmon, Forest Science Data Bank: TD023 [Database]. LTER Intersite Fine Litter Decomposition Experiment (LIDET): Long-Term Ecological Research (2007). [3] G. A. Beattie, S. E. Lindow, Phytopathology 89, 353 (1999). [4] R. A. May, Ecology and Evolution of Communities, A Pattern of Species Abundance and Diversity, 81 (1975). [5] T. B. Parkin, J. A. Robinson, Advances in Soil Science 20, Analysis of Lognormal Data, 194 (1992).
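
    The sketch below is not the authors' regularized inversion; it is a simplified direct fit of a two-parameter log-normal p(k) to a mass-time history, with the superposition integral m(t) = ∫ exp(-kt) p(k) dk evaluated on a grid in log k. The synthetic data and starting values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mass_remaining(t, mu, sigma, n_grid=400):
        """
        Fraction of litter mass remaining at times t when decay rates k are
        log-normally distributed: m(t) = integral exp(-k t) p(log k) d(log k).
        """
        log_k = np.linspace(mu - 6.0 * sigma, mu + 6.0 * sigma, n_grid)
        d_log_k = log_k[1] - log_k[0]
        p = np.exp(-(log_k - mu) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
        decay = np.exp(-np.outer(np.atleast_1d(t), np.exp(log_k)))
        return decay @ (p * d_log_k)

    # Synthetic mass-time history (years, mass fraction) standing in for one data set.
    t_obs = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
    m_obs = mass_remaining(t_obs, mu=np.log(0.3), sigma=1.0)

    popt, _ = curve_fit(mass_remaining, t_obs, m_obs, p0=(np.log(0.5), 0.5),
                        bounds=([-10.0, 0.01], [5.0, 5.0]))
    print("fitted mean and s.d. of log k:", popt.round(3))
    ```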

  2. The Italian primary school-size distribution and the city-size: a complex nexus

    PubMed Central

    Belmonte, Alessandro; Di Clemente, Riccardo; Buldyrev, Sergey V.

    2014-01-01

    We characterize the statistical law according to which Italian primary school sizes are distributed. We find that the school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features. PMID:24954714

  3. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    NASA Astrophysics Data System (ADS)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
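
    A minimal numerical check of this mixing argument, using purely illustrative means and standard deviations: pooling ages whose mean height rises while the spread shrinks yields a negatively skewed mixture, whereas a growing spread yields a positively skewed one.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

def pooled_skew(means, sds, n_per_age=10_000):
    # mixture obtained by pooling one normal sample per age group
    samples = np.concatenate([rng.normal(m, s, n_per_age) for m, s in zip(means, sds)])
    return skew(samples)

# late puberty: means increase, variation decreases -> negative skew (closer to normal)
print(pooled_skew(means=[165, 168, 170], sds=[7, 5, 4]))
# early puberty: means increase, variation increases -> positive skew (log-normal-like)
print(pooled_skew(means=[140, 145, 152], sds=[4, 6, 8]))
```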

  4. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
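
    A hedged sketch of such a simulation, with an assumed one-compartment oral-absorption model, parameter values and error structure (not the paper's settings): log-normal between-subject PK parameters, multiplicative measurement error on the sampled concentrations, and noncompartmental AUC and Cmax.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
n_subj, dose, F = 2000, 100.0, 1.0
t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24])          # sampling times (h)

ka = np.exp(rng.normal(np.log(1.0), 0.3, n_subj))          # absorption rate (1/h)
CL = np.exp(rng.normal(np.log(5.0), 0.3, n_subj))          # clearance (L/h)
V  = np.exp(rng.normal(np.log(50.0), 0.2, n_subj))         # volume (L)
ke = CL / V

# one-compartment oral absorption profile plus proportional measurement error
conc = (F * dose * ka / (V * (ka - ke)))[:, None] * (
    np.exp(-np.outer(ke, t)) - np.exp(-np.outer(ka, t)))
conc *= np.exp(rng.normal(0.0, 0.2, conc.shape))

auc = trapezoid(conc, t, axis=1)
cmax = conc.max(axis=1)

for name, x in [("log(AUC)", np.log(auc)), ("log(Cmax)", np.log(cmax))]:
    print(name, "skew:", round(stats.skew(x), 3),
          "excess kurtosis:", round(stats.kurtosis(x), 3))
```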

  5. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarpelli, M; Eickhoff, J; Perlman, S

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
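
    A minimal sketch of the transformation-selection step described above, with a synthetic stand-in for the gSUVmean values: scan the Box-Cox parameter and keep the value that maximizes the Shapiro-Wilk statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
gsuv_mean = np.exp(rng.normal(0.8, 0.4, 40))      # stand-in for one scan's gSUVmean values

lambdas = np.linspace(-2, 2, 401)
w_stats = [stats.shapiro(stats.boxcox(gsuv_mean, lmbda=lam)).statistic for lam in lambdas]
best = lambdas[int(np.argmax(w_stats))]
print("optimal Box-Cox lambda:", round(best, 2))

# lambda near 0 corresponds to the log transform
print("Shapiro-Wilk p after log transform:", stats.shapiro(np.log(gsuv_mean)).pvalue)
```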

  6. Empirical analysis on the runners' velocity distribution in city marathons

    NASA Astrophysics Data System (ADS)

    Lin, Zhenquan; Meng, Fan

    2018-01-01

    In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City marathon, the Chicago marathon, the Berlin marathon and the London marathon. By statistical analyses of the datasets of finish time records, we captured some statistical features of human behaviors in marathons: (1) The velocity distributions of all finishers and of partial finishers in the fastest age group both follow a log-normal distribution; (2) In the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) The intensity of the competition, which is described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition gets stronger in the last course of the middle stage, there is a transition from the Gaussian distribution back to a log-normal one at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.

  7. Quality of the log-geometric distribution extrapolation for smaller undiscovered oil and gas pool size

    USGS Publications Warehouse

    Chenglin, L.; Charpentier, R.R.

    2010-01-01

    The U.S. Geological Survey procedure for the estimation of the general form of the parent distribution requires that the parameters of the log-geometric distribution be calculated and analyzed for the sensitivity of these parameters to different conditions. In this study, we derive the shape factor of a log-geometric distribution from the ratio of frequencies between adjacent bins. The shape factor has a log straight-line relationship with the ratio of frequencies. Additionally, equations for calculating the ratio of the mean size to the lower size-class boundary are derived. For a specific log-geometric distribution, we find that the ratio of the mean size to the lower size-class boundary is the same across size classes. We apply our analysis to simulations based on oil and gas pool distributions from four petroleum systems of Alberta, Canada and four generated distributions. Each petroleum system in Alberta has a different shape factor. Generally, the shape factors in the four petroleum systems stabilize as the number of discovered pools increases. For a log-geometric distribution, the shape factor becomes stable when the number of discovered pools exceeds 50, and the shape factor is influenced by the exploration efficiency when the exploration efficiency is less than 1. The simulation results show that calculated shape factors increase with those of the parent distributions, and undiscovered oil and gas resources estimated through the log-geometric distribution extrapolation are smaller than the actual values. © 2010 International Association for Mathematical Geology.

  8. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
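
    A from-scratch sketch of the log-normal mock idea (not powerbox's actual implementation): build a Gaussian random field with a chosen power spectrum via FFT, then exponentiate it to obtain a positive, log-normally distributed overdensity field.

```python
import numpy as np

def lognormal_field(n=128, boxlength=100.0, pk=lambda k: 0.1 * k**-2.0, seed=0):
    rng = np.random.default_rng(seed)
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=boxlength / n)
    kx, ky = np.meshgrid(kfreq, kfreq, indexing="ij")
    k = np.hypot(kx, ky)

    amplitude = np.zeros_like(k)
    amplitude[k > 0] = np.sqrt(pk(k[k > 0]))       # filter shaping the power spectrum

    white = rng.normal(size=(n, n))
    delta_g = np.fft.ifft2(np.fft.fft2(white) * amplitude).real
    delta_g *= 1.0 / delta_g.std()                 # normalize (illustrative choice)

    sigma2 = delta_g.var()
    return np.exp(delta_g - sigma2 / 2.0) - 1.0    # mean-zero, log-normal overdensity

delta_ln = lognormal_field()
print(delta_ln.mean(), delta_ln.min())             # mean ~0, bounded below by -1
```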

  9. Estimating sales and sales market share from sales rank data for consumer appliances

    NASA Astrophysics Data System (ADS)

    Touzani, Samir; Van Buskirk, Robert

    2016-06-01

    Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution, and specifically its truncated version, is well suited for this purpose. We demonstrate that sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.

  10. Structural changes of casein micelles in a calcium gradient film.

    PubMed

    Gebhardt, Ronald; Burghammer, Manfred; Riekel, Christian; Roth, Stephan Volkher; Müller-Buschbaum, Peter

    2008-04-09

    Calcium gradients are prepared by sequentially filling a micropipette with casein solutions of varying calcium concentration and spreading them on glass slides. The casein film is formed by a solution casting process, which results in a macroscopically rough surface. Microbeam grazing incidence small-angle X-ray scattering (microGISAXS) is used to investigate the lateral size distribution of three main components in casein films: casein micelles, casein mini-micelles, and micellar calcium phosphate. At length scales within the beam size the film surface is flat and detection of size distribution in a macroscopic casein gradient becomes accessible. The model used to analyze the data is based on a set of three log-normal distributed particle sizes. Increasing calcium concentration causes a decrease in casein micelle diameter while the size of casein mini-micelles increases and micellar calcium phosphate particles remain unchanged.

  11. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
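
    A small illustration of the key reformulation step, with hypothetical numbers: a chance constraint whose right-hand side is log-normal can be replaced by a deterministic constraint built from the log-normal quantile at the required satisfaction level.

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = np.log(50.0), 0.4          # assumed parameters of the allowable runoff load W
alpha = 0.95                            # required constraint-satisfaction level

# P(a*x <= W) >= alpha  with W ~ LogNormal(mu, sigma)
# is equivalent to  a*x <= F_W^{-1}(1 - alpha)
w_quantile = lognorm.ppf(1 - alpha, s=sigma, scale=np.exp(mu))
a = 2.0                                 # pollutant load per unit activity (assumed)
x_max = w_quantile / a
print("deterministic bound on activity level x:", round(x_max, 2))
```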

  12. Phylogenetic analyses suggest that diversification and body size evolution are independent in insects.

    PubMed

    Rainford, James L; Hofreiter, Michael; Mayhew, Peter J

    2016-01-08

    Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda (insects and their close relatives), using recent advances in phylogenetic understanding, with the aim of investigating the link between size and diversity within this ancient and highly diverse lineage. The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally distributed, consistent with previous studies at lower taxonomic levels, and contrasting with skewed distributions typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes including significant phylogenetic conservatism. Based on our findings, we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates, may lead to misleading conclusions when applied to other major terrestrial lineages. Our results indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect diversification is generally unfettered by size-biased macro-evolutionary processes, and that these processes over large timescales tend to converge on apparently neutral evolutionary processes. We also identify limitations on available data within the clade and modeling approaches for the resolution of trees of higher taxa, the resolution of which may collectively enhance our understanding of this key component of terrestrial ecosystems.

  13. Generalised Extreme Value Distributions Provide a Natural Hypothesis for the Shape of Seed Mass Distributions

    PubMed Central

    2015-01-01

    Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and generally positively skewed. Such data are usually log-transformed “for normality” but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise disturbance events experienced in a location to which individual species’ life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm. PMID:25830773
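
    A minimal sketch of this model comparison on synthetic stand-in data (not the 34-site dataset): fit a GEV and a normal to log10 seed masses and compare them with AIC.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# synthetic log10 seed masses drawn from a GEV purely for illustration
log10_mass = stats.genextreme.rvs(c=-0.2, loc=0.0, scale=0.8, size=500, random_state=rng)

def aic(logpdf_sum, n_params):
    return 2 * n_params - 2 * logpdf_sum

gev_params = stats.genextreme.fit(log10_mass)
norm_params = stats.norm.fit(log10_mass)
aic_gev = aic(stats.genextreme.logpdf(log10_mass, *gev_params).sum(), 3)
aic_norm = aic(stats.norm.logpdf(log10_mass, *norm_params).sum(), 2)
print("AIC (GEV):", round(aic_gev, 1), " AIC (normal):", round(aic_norm, 1))
```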

  14. Mesh size selectivity of the gillnet in East China Sea

    NASA Astrophysics Data System (ADS)

    Li, L. Z.; Tang, J. H.; Xiong, Y.; Huang, H. L.; Wu, L.; Shi, J. J.; Gao, Y. S.; Wu, F. Q.

    2017-07-01

    A production test using several gillnets with various mesh sizes was carried out to determine the selectivity of gillnets in the East China Sea. The result showed that the composition of the catch species was jointly affected by panel height and mesh size. The 10-m nets caught more bycatch species than the 6-m nets. For target species, the effect of panel height on juvenile fish was ambiguous, but the number of juvenile fish declined quickly with increasing mesh size. According to model deviance (D) and Akaike’s information criterion, the bi-normal model provided the best fit for small yellow croaker (Larimichthy polyactis), with relative retentions of 0.2 and 1, respectively. For Chelidonichthys spinosus, the log-normal was the best model; the right tilt of the selectivity curve was obvious and coincided well with the original data. The contact population of small yellow croaker showed a bi-normal distribution, with body lengths ranging from 95 to 215 mm. The contact population of C. spinosus showed a normal distribution, with body lengths ranging from 95 to 205 mm. These results can provide a reference for coastal fishery management.

  15. Retrieval of phytoplankton cell size from chlorophyll a specific absorption and scattering spectra of phytoplankton.

    PubMed

    Zhou, Wen; Wang, Guifen; Li, Cai; Xu, Zhantang; Cao, Wenxi; Shen, Fang

    2017-10-20

    Phytoplankton cell size is an important property that affects diverse ecological and biogeochemical processes, and analysis of the absorption and scattering spectra of phytoplankton can provide important information about phytoplankton size. In this study, an inversion method for extracting quantitative phytoplankton cell size data from these spectra was developed. This inversion method requires two inputs: chlorophyll a specific absorption and scattering spectra of phytoplankton. The average equivalent-volume spherical diameter (ESD_v) was calculated as the single size approximation for the log-normal particle size distribution (PSD) of the algal suspension. The performance of this method for retrieving cell size was assessed using the datasets from cultures of 12 phytoplankton species. The estimations of a(λ) and b(λ) for the phytoplankton population using ESD_v had mean error values of 5.8%-6.9% and 7.0%-10.6%, respectively, compared to the a(λ) and b(λ) for the phytoplankton populations using the log-normal PSD. The estimated values of C_i(ESD_v) were in good agreement with the measurements, with r² = 0.88 and relative root mean square error (NRMSE) = 25.3%, and relatively good performances were also found for the retrieval of ESD_v, with r² = 0.78 and NRMSE = 23.9%.

  16. Shape of growth-rate distribution determines the type of Non-Gibrat’s Property

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki

    2011-11-01

    In this study, the authors examine exhaustive business data on Japanese firms, which cover nearly all companies in the mid- and large-scale ranges in terms of firm size, to reach several key findings on profits/sales distribution and business growth trends. Here, profits denote net profits. First, detailed balance is observed not only in profits data but also in sales data. Furthermore, the growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale. On the one hand, in the mid-scale range of profits, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value increases. This is called Non-Gibrat’s First Property. On the other hand, in the mid-scale range of sales, the probability of positive growth decreases as the initial value increases, while the probability of negative growth hardly changes. This is called Non-Gibrat’s Second Property. Under detailed balance, Non-Gibrat’s First and Second Properties are analytically derived from the linear and quadratic growth-rate distributions in log-log scale, respectively. In both cases, the log-normal distribution is inferred from Non-Gibrat’s Properties and detailed balance. These analytic results are verified by empirical data. Consequently, this clarifies the notion that the difference in shapes between growth-rate distributions of sales and profits is closely related to the difference between the two Non-Gibrat’s Properties in the mid-scale range.

  17. Size distribution of radon daughter particles in uranium mine atmospheres.

    PubMed

    George, A C; Hinchliffe, L; Sladowski, R

    1975-06-01

    The size distribution of radon daughters was measured in several uranium mines using four compact diffusion batteries and a round jet cascade impactor. Simultaneously, measurements were made of uncombined fractions of radon daughters, radon concentration, working level and particle concentration. The size distributions found for radon daughters were log-normal. The activity median diameters ranged from 0.09 μm to 0.3 μm with a mean value of 0.17 μm. Geometric standard deviations were in the range from 1.3 to 4 with a mean value of 2.7. Uncombined fractions expressed in accordance with the ICRP definition ranged from 0.004 to 0.16 with a mean value of 0.04. The radon daughter sizes in these mines are greater than the sizes assumed by various authors in calculating respiratory tract dose. The disparity may reflect the widening use of diesel-powered equipment in large uranium mines.
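
    A small worked example using the reported central values (AMD = 0.17 μm, GSD = 2.7): for an activity-weighted log-normal size distribution, the fraction of activity carried by particles in a given diameter band follows from the normal CDF of the log-diameter.

```python
import numpy as np
from scipy.stats import norm

amd, gsd = 0.17, 2.7                       # activity median diameter (um), geometric SD
mu, sigma = np.log(amd), np.log(gsd)

def activity_fraction(d_lo, d_hi):
    # fraction of activity on particles with diameter between d_lo and d_hi (um)
    return norm.cdf(np.log(d_hi), mu, sigma) - norm.cdf(np.log(d_lo), mu, sigma)

print("fraction between 0.09 and 0.30 um:", round(activity_fraction(0.09, 0.30), 3))
```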

  18. How log-normal is your country? An analysis of the statistical distribution of the exported volumes of products

    NASA Astrophysics Data System (ADS)

    Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea

    2016-10-01

    We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but heavily depends on the level of development of the nation, as expressed by macroeconomic indicators like GDP, GDP per capita, total export and a recently introduced measure for countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries, b) a complete log-normal, with a wider range of volumes, for nations characterized by intermediate economy, and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all the 148 countries through different tests, Kolmogorov-Smirnov and Cramér-Von Mises, confirming that it cannot be rejected only for the countries of intermediate economy.
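
    A minimal sketch of the log-normality checks named above, using synthetic export volumes as a stand-in: fit a normal to the log-volumes and apply the Kolmogorov-Smirnov and Cramér-von Mises tests. Note that p-values computed with parameters estimated from the same sample are only approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
volumes = np.exp(rng.normal(10.0, 1.5, 800))      # stand-in for one country's export volumes

logv = np.log(volumes)
mu, sd = stats.norm.fit(logv)
print("Kolmogorov-Smirnov:", stats.kstest(logv, stats.norm(mu, sd).cdf))
print("Cramer-von Mises:  ", stats.cramervonmises(logv, stats.norm(mu, sd).cdf))
```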

  19. Spatial organization of surface nanobubbles and its implications in their formation process.

    PubMed

    Lhuissier, Henri; Lohse, Detlef; Zhang, Xuehua

    2014-02-21

    We study the size and spatial distribution of surface nanobubbles formed by the solvent exchange method to gain insight into the mechanism of their formation. The analysis of Atomic Force Microscopy (AFM) images of nanobubbles formed on a hydrophobic surface reveals that the nanobubbles are not randomly located, which we attribute to the role of the history of nucleation during the formation. Moreover, the size of each nanobubble is found to be strongly correlated with the area of the bubble-depleted zone around it. The precise correlation suggests that the nanobubbles grow by diffusion of the gas from the bulk rather than by diffusion of the gas adsorbed on the surface. Lastly, the size distribution of the nanobubbles is found to be well described by a log-normal distribution.

  20. Log-Normal Turbulence Dissipation in Global Ocean Models

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  1. Studies of Isolated and Non-isolated Photospheric Bright Points in an Active Region Observed by the New Vacuum Solar Telescope

    NASA Astrophysics Data System (ADS)

    Liu, Yanxiao; Xiang, Yongyuan; Erdélyi, Robertus; Liu, Zhong; Li, Dong; Ning, Zongjun; Bi, Yi; Wu, Ning; Lin, Jun

    2018-03-01

    Properties of photospheric bright points (BPs) near an active region have been studied in TiO λ 7058 Å images observed by the New Vacuum Solar Telescope of the Yunnan Observatories. We developed a novel recognition method that was used to identify and track 2010 BPs. The observed evolving BPs are classified into isolated (individual) and non-isolated (where multiple BPs are observed to display splitting and merging behaviors) sets. About 35.1% of BPs are non-isolated. For both isolated and non-isolated BPs, the brightness varies from 0.8 to 1.3 times the average background intensity and follows a Gaussian distribution. The lifetimes of BPs follow a log-normal distribution, with characteristic lifetimes of (267 ± 140) s and (421 ± 255) s, respectively. Their sizes also follow a log-normal distribution, with an average size of about (2.15 ± 0.74) × 10⁴ km² and (3.00 ± 1.31) × 10⁴ km² for area, and (163 ± 27) km and (191 ± 40) km for diameter, respectively. Our results indicate that regions with strong background magnetic field have higher BP number density and higher BP area coverage than regions with weak background field. Apparently, the brightness/size of BPs does not depend on the background field. Lifetimes in regions with strong background magnetic field are shorter than those in regions with weak background field, on average.

  2. Grain coarsening in two-dimensional phase-field models with an orientation field

    NASA Astrophysics Data System (ADS)

    Korbuly, Bálint; Pusztai, Tamás; Henry, Hervé; Plapp, Mathis; Apel, Markus; Gránásy, László

    2017-05-01

    In the literature, contradictory results have been published regarding the form of the limiting (long-time) grain size distribution (LGSD) that characterizes the late stage grain coarsening in two-dimensional and quasi-two-dimensional polycrystalline systems. While experiments and the phase-field crystal (PFC) model (a simple dynamical density functional theory) indicate a log-normal distribution, other works including theoretical studies based on conventional phase-field simulations that rely on coarse grained fields, like the multi-phase-field (MPF) and orientation field (OF) models, yield significantly different distributions. In a recent work, we have shown that the coarse grained phase-field models (whether MPF or OF) yield very similar limiting size distributions that seem to differ from the theoretical predictions. Herein, we revisit this problem, and demonstrate in the case of OF models [R. Kobayashi, J. A. Warren, and W. C. Carter, Physica D 140, 141 (2000), 10.1016/S0167-2789(00)00023-3; H. Henry, J. Mellenthin, and M. Plapp, Phys. Rev. B 86, 054117 (2012), 10.1103/PhysRevB.86.054117] that an insufficient resolution of the small angle grain boundaries leads to a log-normal distribution close to those seen in the experiments and the molecular scale PFC simulations. Our paper indicates, furthermore, that the LGSD is critically sensitive to the details of the evaluation process, and raises the possibility that the differences among the LGSD results from different sources may originate from differences in the detection of small angle grain boundaries.

  3. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    NASA Astrophysics Data System (ADS)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the world's two thousand leading or strongest publicly traded companies (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Every one of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part, and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, then the 82 points of the aggregate wealth distribution by industry per employee can be fitted well by quasi-exponential curves for the four metrics.

  4. Frequency distribution of lithium in leaves of Lycium andersonii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romney, E.M.; Wallace, A.; Kinnear, J.

    1977-01-01

    Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location and tended not to follow a log_e-normal distribution, and to follow a normal distribution only poorly. There was some negative skewness to the log_e distribution which did exist. The results imply that the variation in accumulation of Li depends upon native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both log_e-normally distributed. The mean leaf concentration of Li in all locations was 29 μg/g, but the maximum was 166 μg/g.

  5. Thermal and log-normal distributions of plasma in laser driven Coulomb explosions of deuterium clusters

    NASA Astrophysics Data System (ADS)

    Barbarino, M.; Warrens, M.; Bonasera, A.; Lattuada, D.; Bang, W.; Quevedo, H. J.; Consoli, F.; de Angelis, R.; Andreoli, P.; Kimura, S.; Dyer, G.; Bernstein, A. C.; Hagel, K.; Barbui, M.; Schmidt, K.; Gaul, E.; Donovan, M. E.; Natowitz, J. B.; Ditmire, T.

    2016-08-01

    In this work, we explore the possibility that the motion of the deuterium ions emitted from Coulomb cluster explosions is sufficiently disordered to resemble thermalization. We analyze the process of nuclear fusion reactions driven by laser-cluster interactions in experiments conducted at the Texas Petawatt laser facility using a mixture of D2+3He and CD4+3He cluster targets. When clusters explode by Coulomb repulsion, the emission of the energetic ions is “nearly” isotropic. In the framework of cluster Coulomb explosions, we analyze the energy distributions of the ions using a Maxwell-Boltzmann (MB) distribution, a shifted MB distribution (sMB), and the energy distribution derived from a log-normal (LN) size distribution of clusters. We show that the first two distributions reproduce well the experimentally measured ion energy distributions and the number of fusions from d-d and d-3He reactions. The LN distribution is a good representation of the ion kinetic energy distribution up to high momenta, where the noise becomes dominant, but overestimates both the neutron and the proton yields. If the parameters of the LN distributions are chosen to reproduce the fusion yields correctly, the experimentally measured high-energy ion spectrum is not well represented. We conclude that the ion kinetic energy distribution is highly disordered and practically indistinguishable from a thermalized one.

  6. A model for the flux-r.m.s. correlation in blazar variability or the minijets-in-a-jet statistical model

    NASA Astrophysics Data System (ADS)

    Biteau, J.; Giebels, B.

    2012-12-01

    Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that the variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability from multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
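
    A hedged Monte Carlo sketch of this kinematic argument, with an assumed bulk Lorentz factor and spectral index: randomly oriented regions with Doppler-boosted fluxes give a heavy-tailed single-zone flux distribution, while the summed flux of many regions is strongly skewed and shows a positive mean-rms correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
gamma, alpha = 10.0, 1.0                      # bulk Lorentz factor, spectral index (assumed)
beta = np.sqrt(1 - 1 / gamma**2)

def summed_flux(n_regions, n_epochs):
    cos_theta = rng.uniform(-1, 1, (n_epochs, n_regions))     # isotropic orientations
    doppler = 1.0 / (gamma * (1 - beta * cos_theta))          # Doppler factor per region
    return (doppler ** (3 + alpha)).sum(axis=1)               # boosted fluxes, summed

flux = summed_flux(n_regions=100, n_epochs=20_000)
print("skew of log(total flux):", round(stats.skew(np.log(flux)), 3))

# flux-rms relation across groups of epochs
groups = flux.reshape(200, 100)
print("corr(mean, rms):", round(np.corrcoef(groups.mean(1), groups.std(1))[0, 1], 3))
```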

  7. Coverage dependent molecular assembly of anthraquinone on Au(111)

    NASA Astrophysics Data System (ADS)

    DeLoach, Andrew S.; Conrad, Brad R.; Einstein, T. L.; Dougherty, Daniel B.

    2017-11-01

    A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.

  8. Coverage dependent molecular assembly of anthraquinone on Au(111).

    PubMed

    DeLoach, Andrew S; Conrad, Brad R; Einstein, T L; Dougherty, Daniel B

    2017-11-14

    A scanning tunneling microscopy study of anthraquinone (AQ) on the Au(111) surface shows that the molecules self-assemble into several structures depending on the local surface coverage. At high coverages, a close-packed saturated monolayer is observed, while at low coverages, mobile surface molecules coexist with stable chiral hexamer clusters. At intermediate coverages, a disordered 2D porous network interlinking close-packed islands is observed in contrast to the giant honeycomb networks observed for the same molecule on Cu(111). This difference verifies the predicted extreme sensitivity [J. Wyrick et al., Nano Lett. 11, 2944 (2011)] of the pore network to small changes in the surface electronic structure. Quantitative analysis of the 2D pore network reveals that the areas of the vacancy islands are distributed log-normally. Log-normal distributions are typically associated with the product of random variables (multiplicative noise), and we propose that the distribution of pore sizes for AQ on Au(111) originates from random linear rate constants for molecules to either desorb from the surface or detach from the region of a nucleated pore.

  9. Casein micelles: size distribution in milks from individual cows.

    PubMed

    de Kruif, C G Kees; Huppertz, Thom

    2012-05-09

    The size distribution and protein composition of casein micelles in the milk of Holstein-Friesian cows was determined as a function of stage and number of lactations. Protein composition did not vary significantly between the milks of different cows or as a function of lactation stage. Differences in the size and polydispersity of the casein micelles were observed between the milks of different cows, but not as a function of stage of milking or stage of lactation and not even over successive lactations periods. Modal radii varied from 55 to 70 nm, whereas hydrodynamic radii at a scattering angle of 73° (Q² = 350 μm⁻²) varied from 77 to 115 nm and polydispersity varied from 0.27 to 0.41, in a log-normal distribution. Casein micelle size in the milks of individual cows was not correlated with age, milk production, or lactation stage of the cows or fat or protein content of the milk.

  10. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values.

    PubMed

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-01-30

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.

  11. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    NASA Astrophysics Data System (ADS)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.

  12. Statistical analysis of variability properties of the Kepler blazar W2R 1926+42

    NASA Astrophysics Data System (ADS)

    Li, Yutong; Hu, Shaoming; Wiita, Paul J.; Gupta, Alok C.

    2018-04-01

    We analyzed Kepler light curves of the blazar W2R 1926+42 that provided nearly continuous coverage from quarter 11 through quarter 17 (589 days between 2011 and 2013) and examined some of their flux variability properties. We investigate the possibility that the light curve is dominated by a large number of individual flares and adopt exponential rise and decay models to investigate the symmetry properties of the flares. We found that the variations of W2R 1926+42 are predominantly asymmetric, with weak tendencies toward positive asymmetry (rapid rise and slow decay). The durations (D) and the amplitudes (F0) of flares can be fit with log-normal distributions. The energy (E) of each flare is also estimated for the first time. There are positive correlations between log D and log E with a slope of 1.36, and between log F0 and log E with a slope of 1.12. Lomb-Scargle periodograms are used to estimate the power spectral density (PSD) shape. It is well described by a power law with an index ranging between -1.1 and -1.5. The sizes of the emission regions, R, are estimated to be in the range of 1.1 × 10¹⁵ cm to 6.6 × 10¹⁶ cm. The flare asymmetry is difficult to explain by a light travel time effect but may be caused by differences between the timescales for acceleration and dissipation of high-energy particles in the relativistic jet. A jet-in-jet model also could produce the observed log-normal distributions.

  13. Effect of particle size distribution on permeability in the randomly packed porous media

    NASA Astrophysics Data System (ADS)

    Markicevic, Bojan

    2017-11-01

    The question of how porous medium heterogeneity influences permeability remains unresolved: both increases and decreases in the permeability value have been reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum-to-minimum particle size ratio ranging from three to eight for different distributions. In all six cases, the average particle size is kept the same. For all media generated, the stochastic homogeneity is checked from the distributions of the three coordinates of the particle centers, and uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, for which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in this domain, and after checking the axial linearity of the pressure, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher than that of the other media, which can be explained by the volumetric contribution of larger particles and the larger passages available for fluid flow.

  14. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    PubMed

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires the presence of species in a sample to be assessed, while counts of the number of individuals per species are required for only a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample and at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and comprehension of the left tail of the species abundance distribution. We show how to choose the scale of sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
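
    A simplified sketch of the idea (not the authors' exact likelihood, and with toy data): fully counted species contribute their Poisson log-normal probability, while species recorded only as presences contribute the probability of a non-zero count.

```python
import numpy as np
from scipy import integrate, stats
from scipy.optimize import minimize

def pln_pmf(n, mu, sigma):
    # marginal Poisson log-normal pmf via integration over the latent log-abundance
    f = lambda x: stats.poisson.pmf(n, np.exp(x)) * stats.norm.pdf(x, mu, sigma)
    return integrate.quad(f, mu - 8 * sigma, mu + 8 * sigma)[0]

def neg_log_lik(params, counts, n_incidence_only):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = sum(np.log(pln_pmf(n, mu, sigma)) for n in counts)
    ll += n_incidence_only * np.log(1.0 - pln_pmf(0, mu, sigma))   # presence-only species
    return -ll

counts = [1, 1, 2, 3, 3, 5, 8, 13, 40]      # species counted in the subsample (toy data)
n_incidence_only = 12                       # extra species recorded only as presences
fit = minimize(neg_log_lik, x0=[1.0, 0.0], args=(counts, n_incidence_only),
               method="Nelder-Mead")
print("mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```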

  15. Beyond the power law: Uncovering stylized facts in interbank networks

    NASA Astrophysics Data System (ADS)

    Vandermarliere, Benjamin; Karas, Alexei; Ryckebusch, Jan; Schoors, Koen

    2015-06-01

    We use daily data on bilateral interbank exposures and monthly bank balance sheets to study network characteristics of the Russian interbank market over August 1998-October 2004. Specifically, we examine the distributions of (un)directed (un)weighted degree, nodal attributes (bank assets, capital and capital-to-assets ratio) and edge weights (loan size and counterparty exposure). We search for the theoretical distribution that fits the data best and report the "best" fit parameters. We observe that all studied distributions are heavy tailed. The fat tail typically contains 20% of the data and can be mostly described well by a truncated power law. Also the power law, stretched exponential and log-normal provide reasonably good fits to the tails of the data. In most cases, however, separating the bulk and tail parts of the data is hard, so we proceed to study the full range of the events. We find that the stretched exponential and the log-normal distributions fit the full range of the data best. These conclusions are robust to (1) whether we aggregate the data over a week, month, quarter or year; (2) whether we look at the "growth" versus "maturity" phases of interbank market development; and (3) with minor exceptions, whether we look at the "normal" versus "crisis" operation periods. In line with prior research, we find that the network topology changes greatly as the interbank market moves from a "normal" to a "crisis" operation period.
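
    A minimal sketch of this kind of full-range comparison on synthetic stand-in loan sizes: fit log-normal and stretched-exponential (Weibull) models by maximum likelihood and compare the resulting log-likelihoods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
loans = np.exp(rng.normal(13.0, 1.8, 5000))      # stand-in for interbank loan sizes

candidates = {
    "log-normal": stats.lognorm,
    "stretched exponential (Weibull)": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(loans, floc=0)             # fix the location at zero
    ll = dist.logpdf(loans, *params).sum()
    print(f"{name:32s} log-likelihood = {ll:.1f}")
```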

  16. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    PubMed

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Evaluation of waste mushroom logs as a potential biomass resource for the production of bioethanol.

    PubMed

    Lee, Jae-Won; Koo, Bon-Wook; Choi, Joon-Weon; Choi, Don-Ha; Choi, In-Gyu

    2008-05-01

    In order to investigate the possibility of using waste mushroom logs as a biomass resource for alternative energy production, the chemical and physical characteristics of normal wood and waste mushroom logs were examined. Size reduction of normal wood (145 kW h/tonne) required significantly higher energy consumption than that of waste mushroom logs (70 kW h/tonne). The crystallinity value of waste mushroom logs was dramatically lower (33%) than that of normal wood (49%) after cultivation with Lentinus edodes as spawn. Lignin, an enzymatic hydrolysis inhibitor in sugar production, decreased from 21.07% to 18.78% after inoculation with L. edodes. Total sugar yields obtained by enzyme and acid hydrolysis were higher in waste mushroom logs than in normal wood. After 24 h of fermentation, 12 g/L ethanol was produced from waste mushroom logs, while normal wood produced 8 g/L ethanol. These results indicate that waste mushroom logs are an economically suitable lignocellulosic material for the production of fermentable sugars related to bioethanol production.

  18. Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

    PubMed Central

    Limpert, Eckhard; Stahel, Werner A.

    2011-01-01

    Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/ (times-divide), and corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation s* in the form mean* x/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325

  19. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    PubMed

    Limpert, Eckhard; Stahel, Werner A

    2011-01-01

    The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/ (times-divide), and corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean, mean*, and the multiplicative standard deviation s* in the form mean* x/ s*, which is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
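
    A minimal sketch of the multiplicative summary advocated in this abstract, using synthetic log-normal data: it computes the geometric mean and the multiplicative standard deviation s*, and the corresponding x/ intervals (the roughly 68% and 95% ranges).

      import numpy as np

      def multiplicative_summary(x):
          """Geometric mean and multiplicative standard deviation s*
          for strictly positive data."""
          logs = np.log(x)
          gm = np.exp(logs.mean())           # geometric mean
          s_star = np.exp(logs.std(ddof=1))  # multiplicative SD, s*
          return gm, s_star

      rng = np.random.default_rng(1)
      x = rng.lognormal(mean=1.0, sigma=0.5, size=1000)  # synthetic skewed data
      gm, s_star = multiplicative_summary(x)
      print(f"geometric mean = {gm:.3f}, s* = {s_star:.3f}")
      print(f"~68% range: [{gm / s_star:.3f}, {gm * s_star:.3f}]")        # mean* x/ s*
      print(f"~95% range: [{gm / s_star**2:.3f}, {gm * s_star**2:.3f}]")  # mean* x/ (s*)^2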

  20. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable under analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.

  1. Uniwavelength lidar sensitivity to spherical aerosol microphysical properties for the interpretation of Lagrangian stratospheric observations

    NASA Astrophysics Data System (ADS)

    Jumelet, Julien; David, Christine; Bekki, Slimane; Keckhut, Philippe

    2009-01-01

    The determination of stratospheric particle microphysical properties from multiwavelength lidar, including Rayleigh and/or Raman detection, has been widely investigated. However, most lidar systems are uniwavelength, operating at 532 nm. Although the information content of such lidar data is too limited to allow the retrieval of the full size distribution, the coupling of two or more uniwavelength lidar measurements probing the same moving air parcel may provide some meaningful size information. Within the ORACLE-O3 IPY project, the coordination of several ground-based lidars and the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) space-borne lidar is planned during measurement campaigns called MATCH-PSC (Polar Stratospheric Clouds). While probing the same moving air masses, the evolution of the measured backscatter coefficient (BC) should reflect the variation of particle microphysical properties. A sensitivity study of 532 nm lidar particle backscatter to variations of particle size distribution parameters is carried out. For simplicity, the particles are assumed to be spherical (liquid) particles and the size distribution is represented with a unimodal log-normal distribution. Each of the four microphysical parameters (i.e. the log-normal size distribution parameters and the refractive index) is analysed separately, while the other three remain set to constant reference values. Overall, the BC behaviour is not affected by the initial values taken as references. The total concentration (N0) is the parameter to which BC is least sensitive, whereas it is most sensitive to the refractive index (m). A 2% variation of m induces a 15% variation of the lidar BC, while the uncertainty on the BC retrieval can also reach 15%. This result underlines the importance of having both an accurate lidar inversion method and a good knowledge of the temperature for size distribution retrieval techniques. The standard deviation (σ) is the second parameter to which BC is most sensitive. Yet, the impact of m and σ on BC variations is limited by the realistic range of their variations. The mean radius (rm) of the size distribution is thus the key parameter for BC, as it can vary several-fold. BC is most sensitive to the presence of large particles. The sensitivity of BC to rm and σ variations increases when the initial size distributions are characterized by low rm and large σ. This makes lidar better suited to detecting particles growing on background aerosols than on volcanic aerosols.

  2. Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

    In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of any given element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time over the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase when the sample was obtained is expected to show a normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
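
    A small sketch of this argument, with illustrative (not article-specific) pharmacokinetic values: under first-order elimination, log C is linear in the sampling time t, so normally distributed sampling times yield log-normally distributed concentrations.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # One-compartment, first-order elimination: C(t) = C0 * exp(-k * t).
      C0, k = 100.0, 0.3                               # illustrative values
      t = rng.normal(loc=8.0, scale=2.0, size=5000)    # sampling times ~ normal
      t = t[t > 0]                                     # keep physically meaningful times
      conc = C0 * np.exp(-k * t)

      # Since log(C) = log(C0) - k*t is normal, C itself is log-normal.
      print(stats.shapiro(np.log(conc[:500])))         # log-concentrations ~ normal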

  3. On the Use of the Log-Normal Particle Size Distribution to Characterize Global Rain

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Rincon, Rafael; Liao, Liang

    2003-01-01

    Although most parameterizations of the drop size distributions (DSD) use the gamma function, there are several advantages to the log-normal form, particularly if we want to characterize the large scale space-time variability of the DSD and rain rate. The advantages of the distribution are twofold: the logarithm of any moment can be expressed as a linear combination of the individual parameters of the distribution; the parameters of the distribution are approximately normally distributed. Since all radar and rainfall-related parameters can be written approximately as a moment of the DSD, the first property allows us to express the logarithm of any radar/rainfall variable as a linear combination of the individual DSD parameters. Another consequence is that any power law relationship between rain rate, reflectivity factor, specific attenuation or water content can be expressed in terms of the covariance matrix of the DSD parameters. The joint-normal property of the DSD parameters has applications to the description of the space-time variation of rainfall in the sense that any radar-rainfall quantity can be specified by the covariance matrix associated with the DSD parameters at two arbitrary space-time points. As such, the parameterization provides a means by which we can use the spaceborne radar-derived DSD parameters to specify in part the covariance matrices globally. However, since satellite observations have coarse temporal sampling, the specification of the temporal covariance must be derived from ancillary measurements and models. Work is presently underway to determine whether the use of instantaneous rain rate data from the TRMM Precipitation Radar can provide good estimates of the spatial correlation in rain rate from data collected in 5° x 5° x 1 month space-time boxes. To characterize the temporal characteristics of the DSD parameters, disdrometer data are being used from the Wallops Flight Facility site where as many as 4 disdrometers have been used to acquire data over a 2 km path. These data should help quantify the temporal form of the covariance matrix at this site.
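
    The linearity property mentioned above can be checked numerically: for a log-normal DSD with parameters (N_T, mu, sigma), the logarithm of the n-th moment is ln N_T + n*mu + n^2*sigma^2/2, i.e. linear in ln N_T, mu and sigma^2. The parameter values below are illustrative only.

      import numpy as np

      # Log-normal DSD: N(D) = N_T / (sqrt(2*pi)*sigma*D) * exp(-(ln D - mu)^2 / (2*sigma^2))
      N_T, mu, sigma = 1000.0, np.log(1.2), 0.4        # illustrative parameters

      def analytic_log_moment(n):
          # ln M_n is linear in (ln N_T, mu, sigma^2)
          return np.log(N_T) + n * mu + 0.5 * n**2 * sigma**2

      D = np.linspace(1e-3, 20.0, 200_000)
      dsd = N_T / (np.sqrt(2 * np.pi) * sigma * D) * np.exp(-(np.log(D) - mu) ** 2 / (2 * sigma**2))

      for n in (3, 4, 6):   # e.g. water content ~ M3, reflectivity ~ M6
          numeric = np.trapz(D**n * dsd, D)
          print(n, np.log(numeric), analytic_log_moment(n))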

  4. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.

  5. Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.

    2014-11-01

    We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.

  6. [Distribution of individuals by spontaneous frequencies of lymphocytes with micronuclei. Particularity and consequences].

    PubMed

    Serebrianyĭ, A M; Akleev, A V; Aleshchenko, A V; Antoshchina, M M; Kudriashova, O V; Riabchenko, N I; Semenova, L P; Pelevina, I I

    2011-01-01

    Using the micronucleus (MN) assay with cytochalasin B cytokinesis block, the mean frequency of blood lymphocytes with MN was determined in 76 Moscow inhabitants, 35 people from Obninsk and 122 from the Chelyabinsk region. In contrast to the distribution of individuals by the spontaneous frequency of cells with aberrations, which was shown to be binomial (Kusnetzov et al., 1980), the distribution of individuals by the spontaneous frequency of cells with MN in all three data sets can be acknowledged as log-normal (χ2 test). The distribution of individuals in the combined data set (Moscow and Obninsk inhabitants) and in the single data set of all those examined must, with high reliability, be acknowledged as log-normal (0.70 and 0.86, respectively), but it cannot be regarded as Poisson, binomial or normal. Taking into account that a log-normal distribution of children by the spontaneous frequency of lymphocytes with MN has been observed in a survey of 473 children from different kindergartens in Moscow, we conclude that log-normality is a regularity inherent in this type of damage to the lymphocyte genome. On the contrary, the distribution of individuals by the frequency of lymphocytes with MN induced by in vitro irradiation must in most cases be acknowledged as normal. This distribution character indicates that the appearance of damage (genomic instability) in a single lymphocyte of an individual increases the probability of damage appearing in other lymphocytes. We propose that damaged stem cells, the lymphocyte progenitors, exchange information with undamaged cells, a process of the bystander-effect type. It can also be supposed that transmission of damage to daughter cells occurs at the time of stem cell division.

  7. Statistical distribution of building lot frontage: application for Tokyo downtown districts

    NASA Astrophysics Data System (ADS)

    Usui, Hiroyuki

    2018-03-01

    The frontage of a building lot is a determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable for identifying potential districts that comprise a high percentage of building lots with narrow frontage after subdivision, and for reconsidering the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the relationship between the statistical distribution of building lot frontages and the density of buildings and roads has not been fully researched. In this paper, based on an empirical study of the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the distribution of building lot frontages follows a log-normal distribution whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the values of the coefficient of variation of building lot frontages and of the ratio of the number of building lot frontages to that of buildings are approximately equal to 0.60 and 1.19, respectively.

  8. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the un-coded bit error rate and the ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity performs better in resisting the channel fading caused by spatial correlation.
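
    A moment-matching sketch in the spirit of Wilkinson's method, with illustrative fading parameters (log-amplitude variances in natural-log units): the sum of correlated log-normal variables is approximated by a single log-normal whose first two moments match those of the sum.

      import numpy as np

      def wilkinson_lognormal_fit(m, C):
          """Moment-matching (Wilkinson-type) log-normal approximation to
          S = sum_i exp(Y_i), where Y ~ N(m, C) with correlated components.
          Returns (mu_S, sigma_S) of the approximating log-normal."""
          m, C = np.asarray(m, float), np.asarray(C, float)
          d = np.diag(C)
          ES = np.sum(np.exp(m + 0.5 * d))                               # E[S]
          ES2 = np.sum(np.exp(m[:, None] + m[None, :]
                              + 0.5 * (d[:, None] + d[None, :]) + C))    # E[S^2]
          sigma2 = np.log(ES2 / ES**2)
          mu = np.log(ES) - 0.5 * sigma2
          return mu, np.sqrt(sigma2)

      # Two sub-channels with correlation rho (illustrative values only).
      sigma_ln, rho = 0.3, 0.5
      C = sigma_ln**2 * np.array([[1.0, rho], [rho, 1.0]])
      m = np.zeros(2)
      mu, sig = wilkinson_lognormal_fit(m, C)

      # Monte Carlo check of the matched moments.
      rng = np.random.default_rng(3)
      S = np.exp(rng.multivariate_normal(m, C, size=200_000)).sum(axis=1)
      print("E[S]  :", np.exp(mu + 0.5 * sig**2), "vs", S.mean())
      print("E[S^2]:", np.exp(2 * mu + 2 * sig**2), "vs", (S**2).mean())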

  9. Correlation between size distribution and luminescence properties of spool-shaped InAs quantum dots

    NASA Astrophysics Data System (ADS)

    Xie, H.; Prioli, R.; Torelly, G.; Liu, H.; Fischer, A. M.; Jakomin, R.; Mourão, R.; Kawabata, R.; Pires, M. P.; Souza, P. L.; Ponce, F. A.

    2017-05-01

    InAs QDs embedded in an AlGaAs matrix have been produced by MOVPE with a partial capping and annealing technique to achieve controllable QD energy levels that could be useful for solar cell applications. The resulting spool-shaped QDs are around 5 nm in height and have a log-normal diameter distribution, which is observed by TEM to range from 5 to 15 nm. Two photoluminescence peaks associated with QD emission are attributed to the ground-state and first-excited-state transitions. The luminescence peak width is correlated with the distribution of QD diameters through the diameter-dependent QD energy levels.

  10. Stretched exponential distributions in nature and economy: "fat tails" with characteristic scales

    NASA Astrophysics Data System (ADS)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.

  11. Ventilation-perfusion distribution in normal subjects.

    PubMed

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. ρ increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
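
    The LogSD of Va/Q can be reconstructed from the bivariate log-normal parameters, since Var[log(V/Q)] = σ(V)^2 + σ(q)^2 − 2ρσ(V)σ(q). The sketch below assumes natural-log-based LogSD values and uses the exercise figures quoted above purely for illustration.

      import numpy as np

      sigma_V, sigma_q, rho = 0.43, 0.85, 0.87   # values quoted in the abstract (exercise)

      # Analytic LogSD of Va/Q for a bivariate log-normal V, Q.
      logsd_vaq = np.sqrt(sigma_V**2 + sigma_q**2 - 2 * rho * sigma_V * sigma_q)
      print("LogSD Va/Q (analytic):", logsd_vaq)

      # Monte Carlo check by sampling log V and log Q jointly.
      rng = np.random.default_rng(4)
      cov = [[sigma_V**2, rho * sigma_V * sigma_q],
             [rho * sigma_V * sigma_q, sigma_q**2]]
      logV, logQ = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T
      print("LogSD Va/Q (Monte Carlo):", np.std(logV - logQ))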

  12. Statistical characterization of a large geochemical database and effect of sample size

    USGS Publications Warehouse

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.

  13. Detection of vapor nanobubbles by small angle neutron scattering (SANS)

    NASA Astrophysics Data System (ADS)

    Popov, Emilian; He, Lilin; Dominguez-Ontiveros, Elvis; Melnichenko, Yuri

    2018-04-01

    Experiments with boiling water on untreated (roughness 100-300 nm) metal surfaces, studied using small-angle neutron scattering (SANS), show the appearance of structures that are 50-70 nm in size when boiling is present. The scattering signal disappears when the boiling ceases, and no change in the signal is detected at any surface temperature condition below saturation. This confirms that the signal is caused by vapor nanobubbles. Two boiling regimes are evaluated herein that differ by the degree of subcooling (3-10 °C). A polydisperse spherical model with a log-normal distribution fits the SANS data well. The size distribution indicates that a large number of nanobubbles exist on the surface during boiling, and some of them grow into large bubbles.

  14. Effects of composition of grains of debris flow on its impact force

    NASA Astrophysics Data System (ADS)

    Tang, jinbo; Hu, Kaiheng; Cui, Peng

    2017-04-01

    Debris flows are composed of solid material with a broad size distribution, from fine sand to boulders. The impact force imposed by debris flows is a very important issue for protection engineering design and is strongly influenced by their grain composition. However, this issue has not been studied in depth, and the effects of grain composition have not been considered in the calculation of the impact force. In the present study, small-scale flume experiments with five grain compositions for debris flow were carried out to study the effect of the grain composition of debris flow on its impact force. The results show that the impact force of debris flow increases with the grain size, and the hydrodynamic pressure of debris flow is calibrated based on the normalization parameter dmax/d50, in which dmax is the maximum grain size and d50 is the median grain size. Furthermore, a log-logistic statistical distribution can be used to describe the distribution of the magnitude of the impact force of debris flow, where the mean and the variance of the distribution increase with grain size. The distribution proposed in the present study can be used in the reliability analysis of structures impacted by debris flows.

  15. Effect of rapid thermal annealing temperature on the dispersion of Si nanocrystals in SiO2 matrix

    NASA Astrophysics Data System (ADS)

    Saxena, Nupur; Kumar, Pragati; Gupta, Vinay

    2015-05-01

    The effect of rapid thermal annealing temperature on the dispersion of silicon nanocrystals (Si-NCs) embedded in a SiO2 matrix grown by the atom beam sputtering (ABS) method is reported. The dispersion of Si-NCs in SiO2 is an important issue for fabricating high-efficiency devices based on Si-NCs. Transmission electron microscopy studies reveal that the precipitation of excess silicon is almost uniform and that the particles grow to an almost uniform size up to 850 °C. The size distribution of the particles broadens and becomes bimodal as the temperature is increased to 950 °C. This suggests that by controlling the annealing temperature, the dispersion of Si-NCs can be controlled. The results are supported by selected area electron diffraction (SAED) studies and micro-photoluminescence (PL) spectroscopy. A discussion of the effect of the particle size distribution on the PL spectrum is presented based on the tight-binding approximation (TBA) method, using Gaussian and log-normal distributions of particle sizes. The study suggests that the dispersion, and consequently the emission energy, varies as a function of the particle size distribution and can be controlled by the annealing parameters.

  16. Neuropsychological constraints to human data production on a global scale

    NASA Astrophysics Data System (ADS)

    Gros, C.; Kaczor, G.; Marković, D.

    2012-01-01

    Which factors underlie human information production on a global level? In order to gain insight into this question we study a corpus of 252-633 million publicly available data files on the Internet, corresponding to an overall storage volume of 284-675 terabytes. Analyzing the file size distribution for several distinct data types, we find indications that the neuropsychological capacity of the human brain to process and record information may constitute the dominant limiting factor for the overall growth of globally stored information, with real-world economic constraints having only a negligible influence. This supposition draws support from the observation that the file size distributions follow a power law for data without a time component, like images, and a log-normal distribution for multimedia files, for which time is a defining qualia.

  17. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    USGS Publications Warehouse

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of measured 2D and estimated 3D grain size distribution should be π/4 (= 0.785). Actual 2D size distribution of the radii of submicron sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average mean of 2D/3D ratio is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
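
    The π/4 ratio can be verified with a few lines of Monte Carlo: for a sphere cut by a random plane, the distance from the centre to the plane (conditional on intersection) is uniform on [0, R], so the mean cross-section radius is (π/4)R.

      import numpy as np

      rng = np.random.default_rng(5)
      R, n = 1.0, 1_000_000

      # Distance from a random cutting plane to the sphere centre, conditional on
      # the plane intersecting the sphere, is uniform on [0, R].
      z = rng.uniform(0.0, R, n)
      r2d = np.sqrt(R**2 - z**2)          # radius of the circular cross section

      print("mean 2D radius / 3D radius:", r2d.mean() / R)   # -> pi/4 ~ 0.785
      print("conversion factor 3D/2D:  ", R / r2d.mean())    # -> 4/pi ~ 1.273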

  18. Temperature dependence of electron magnetic resonance spectra of iron oxide nanoparticles mineralized in Listeria innocua protein cages

    NASA Astrophysics Data System (ADS)

    Usselman, Robert J.; Russek, Stephen E.; Klem, Michael T.; Allen, Mark A.; Douglas, Trevor; Young, Mark; Idzerda, Yves U.; Singel, David J.

    2012-10-01

    Electron magnetic resonance (EMR) spectroscopy was used to determine the magnetic properties of maghemite (γ-Fe2O3) nanoparticles formed within size-constraining Listeria innocua (LDps)-(DNA-binding protein from starved cells) protein cages that have an inner diameter of 5 nm. Variable-temperature X-band EMR spectra exhibited broad asymmetric resonances with a superimposed narrow peak at a gyromagnetic factor of g ≈ 2. The resonance structure, which depends on both superparamagnetic fluctuations and inhomogeneous broadening, changes dramatically as a function of temperature, and the overall linewidth becomes narrower with increasing temperature. Here, we compare two different models to simulate temperature-dependent lineshape trends. The temperature dependence for both models is derived from a Langevin behavior of the linewidth resulting from "anisotropy melting." The first uses either a truncated log-normal distribution of particle sizes or a bi-modal distribution and then a Landau-Lifshitz lineshape to describe the nanoparticle resonances. The essential feature of this model is that small particles have narrow linewidths and account for the g ≈ 2 feature with a constant resonance field, whereas larger particles have broad linewidths and undergo a shift in resonance field. The second model assumes uniform particles with a diameter around 4 nm and a random distribution of uniaxial anisotropy axes. This model uses a more precise calculation of the linewidth due to superparamagnetic fluctuations and a random distribution of anisotropies. Sharp features in the spectrum near g ≈ 2 are qualitatively predicted at high temperatures. Both models can account for many features of the observed spectra, although each has deficiencies. The first model leads to a nonphysical increase in magnetic moment as the temperature is increased if a log-normal distribution of particle sizes is used. Introducing a bi-modal distribution of particle sizes resolves the unphysical increase in moment with temperature. The second model predicts low-temperature spectra that differ significantly from the observed spectra. The anisotropy energy density K1, determined by fitting the temperature-dependent linewidths, was ~50 kJ/m3, which is considerably larger than that of bulk maghemite. The work presented here indicates that the magnetic properties of these size-constrained nanoparticles and more generally metal oxide nanoparticles with diameters d < 5 nm are complex and that currently existing models are not sufficient for determining their magnetic resonance signatures.

  19. A branching process model for the analysis of abortive colony size distributions in carbon ion-irradiated normal human fibroblasts.

    PubMed

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2014-05-01

    A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that the Monte Carlo simulation using the RCD probability estimated from such a linear relationship well simulates the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified the short-term phase and long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
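
    A minimal Galton-Watson-style sketch of the branching-process idea, assuming a constant per-generation RCD probability; the published model may use generation-dependent probabilities and fitted parameter values, so this is purely illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      def grow_colony(p_rcd, max_generations=16):
          """In each generation every live cell either dies reproductively
          (probability p_rcd) or divides into two. Returns the total number
          of cells ever produced and the number still alive at the end."""
          alive, total = 1, 1
          for _ in range(max_generations):
              if alive == 0:
                  break
              divides = rng.binomial(alive, 1.0 - p_rcd)
              alive = 2 * divides
              total += alive
          return total, alive

      p_rcd = 0.2                       # illustrative RCD probability, not a fitted value
      abortive_sizes, surviving = [], 0
      for _ in range(20_000):
          total, alive = grow_colony(p_rcd)
          if alive == 0:
              abortive_sizes.append(total)   # abortive colony
          else:
              surviving += 1                 # proxy for a clonogenic colony

      print("surviving fraction (proxy):", surviving / 20_000)
      counts, edges = np.histogram(abortive_sizes, bins=np.logspace(0, 4, 30))
      # A log-log plot of counts vs. colony size would show the reported linear trend.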

  20. Statistical Considerations of Data Processing in Giovanni Online Tool

    NASA Technical Reports Server (NTRS)

    Suhung, Shen; Leptoukh, G.; Acker, J.; Berrick, S.

    2005-01-01

    The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua; TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI, and MOVAS for aerosols from MODIS, etc. (http://giovanni.gsfc.nasa.gov) Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products vs. averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as number of pixels (NP) for each grid, standard deviation, etc. Since NP varies spatially and temporally, averaging with or without weighting by NP will be different. In this paper, we address differences of various weighting algorithms for some datasets utilized in Giovanni. The second aspect is related to different averaging methods affecting data quality and interpretation for data with non-normal distribution. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: arithmetic mean (AVG), geometric mean (GEO), and maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with increasing size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in other cases. Further studies indicated that significant differences between AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of log-normal distribution.
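
    The three averaging methods compared above can be written down directly: for log-normal data the maximum-likelihood estimate of the mean is exp(mean(log x) + var(log x)/2), which explains why GEO is systematically lower than AVG and MLE. The data below are synthetic and illustrative only.

      import numpy as np

      def spatial_means(x):
          """Arithmetic mean (AVG), geometric mean (GEO) and log-normal
          maximum-likelihood estimate of the mean (MLE) for positive data."""
          logs = np.log(x)
          avg = x.mean()
          geo = np.exp(logs.mean())
          mle = np.exp(logs.mean() + 0.5 * logs.var(ddof=0))
          return avg, geo, mle

      rng = np.random.default_rng(7)
      chl = rng.lognormal(mean=-1.0, sigma=0.8, size=10_000)  # chlorophyll-like sample
      print("AVG, GEO, MLE:", spatial_means(chl))             # GEO < MLE ~ AVG here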

  1. Stick-slip behavior in a continuum-granular experiment.

    PubMed

    Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott

    2015-12-01

    We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling with Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.

  2. Photoballistics of volcanic jet activity at Stromboli, Italy

    NASA Technical Reports Server (NTRS)

    Chouet, B.; Hamisevicz, N.; Mcgetchin, T. R.

    1974-01-01

    Two night eruptions of the volcano Stromboli were studied through 70-mm photography. Single-camera techniques were used. Particle sphericity, constant velocity in the frame, and radial symmetry were assumed. Properties of the particulate phase found through analysis include: particle size, velocity, total number of particles ejected, angular dispersion and distribution in the jet, time variation of particle size and apparent velocity distribution, averaged volume flux, and kinetic energy carried by the condensed phase. The frequency distributions of particle size and apparent velocities are found to be approximately log normal. The properties of the gas phase were inferred from the fact that it was the transporting medium for the condensed phase. Gas velocity and time variation, volume flux of gas, dynamic pressure, mass erupted, and density were estimated. A CO2-H2O mixture is possible for the observed eruptions. The flow was subsonic. Velocity variations may be explained by an organ pipe resonance. Particle collimation may be produced by a Magnus effect.

  3. Atomic force microscope observation of branching in single transcript molecules derived from human cardiac muscle

    NASA Astrophysics Data System (ADS)

    Reed, Jason; Hsueh, Carlin; Mishra, Bud; Gimzewski, James K.

    2008-09-01

    We have used an atomic force microscope to examine a clinically derived sample of single-molecule gene transcripts, in the form of double-stranded cDNA, (c: complementary) obtained from human cardiac muscle without the use of polymerase chain reaction (PCR) amplification. We observed a log-normal distribution of transcript sizes, with most molecules being in the range of 0.4-7.0 kilobase pairs (kb) or 130-2300 nm in contour length, in accordance with the expected distribution of mRNA (m: messenger) sizes in mammalian cells. We observed novel branching structures not previously known to exist in cDNA, and which could have profound negative effects on traditional analysis of cDNA samples through cloning, PCR and DNA sequencing.

  4. Influence of particle size distribution on nanopowder cold compaction processes

    NASA Astrophysics Data System (ADS)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. The interaction of particles, in addition to well-known contact laws, involves dispersive attraction forces and the possibility of interparticle solid bridge formation, which are of great importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (diameter 10 nm) and large (30 nm) particles; and polydisperse systems corresponding to the log-normal size distribution law with different widths. A non-monotonic dependence of compact density on powder content is revealed in the bidisperse systems. The deviations of compact density in the polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 per cent.

  5. Grain size distribution in sheared polycrystals

    NASA Astrophysics Data System (ADS)

    Sarkar, Tanmoy; Biswas, Santidan; Chaudhuri, Pinaki; Sain, Anirban

    2017-12-01

    Plastic deformation in solids induced by external stresses is of both fundamental and practical interest. Using both phase field crystal modeling and molecular dynamics simulations, we study the shear response of monocomponent polycrystalline solids. We subject mesoscale polycrystalline samples to constant strain rates in a planar Couette flow geometry for studying their plastic flow, in particular their grain deformation dynamics. As opposed to equilibrium solids where grain dynamics is mainly driven by thermal diffusion, external stress/strain induce a much higher level of grain deformation activity in the form of grain rotation, coalescence, and breakage, mediated by dislocations. Despite this, the grain size distribution of this driven system shows only a weak power-law correction to its equilibrium log-normal behavior. We interpret the grain reorganization dynamics using a stochastic model.

  6. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.

  7. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699
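
    A simplified sketch of the comparison described in these two records, assuming paired log(AUC) differences rather than the full crossover ANOVA used in standard BE analysis, and a percentile bootstrap rather than BCa; all values are synthetic and illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)

      # Illustrative paired log(AUC) differences (test minus reference).
      n = 24
      diff = rng.normal(loc=0.05, scale=0.18, size=n)   # formulation effect on log scale

      # Parametric 90% CI of the geometric mean ratio, assuming normal log-differences.
      se = diff.std(ddof=1) / np.sqrt(n)
      t90 = stats.t.ppf(0.95, n - 1)
      ci_param = np.exp([diff.mean() - t90 * se, diff.mean() + t90 * se])

      # Nonparametric bootstrap percentile 90% CI (a BCa interval would add bias
      # and acceleration corrections on top of this).
      boot = np.array([rng.choice(diff, n, replace=True).mean() for _ in range(2000)])
      ci_boot = np.exp(np.percentile(boot, [5, 95]))

      print("parametric 90% CI:", ci_param)
      print("bootstrap 90% CI: ", ci_boot)
      print("meets 80-125% rule:", ci_param[0] >= 0.80 and ci_param[1] <= 1.25)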

  8. Analyzing coastal environments by means of functional data analysis

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.

    2017-07-01

    Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful for describing variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transformation to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
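
    A compact sketch of the second vector approach (clr transformation followed by PCA and cluster analysis), using synthetic compositional data and scikit-learn; the exact preprocessing and clustering settings used in the study are not reproduced here.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      def clr(fractions):
          """Centred log-ratio transform of compositional rows (e.g. grain-size
          fractions per sample); a small offset guards against zeros."""
          x = np.asarray(fractions, float) + 1e-9
          logs = np.log(x)
          return logs - logs.mean(axis=1, keepdims=True)

      # Illustrative PSD matrix: rows = samples, columns = grain-size bins (fractions).
      rng = np.random.default_rng(9)
      psd = rng.dirichlet(alpha=np.linspace(1, 5, 20), size=50)

      scores = PCA(n_components=3).fit_transform(clr(psd))
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
      print(labels[:10])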

  9. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    PubMed

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while the power laws of critical phenomena are derived asymptotically under the condition of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as being almost Zipf's-law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. First, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Second, they manifest as heteroskedasticity among experimental replicates and bring about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity, invoking an instant increase in signal-to-noise ratio by 50% and in statistical/detection sensitivity by as much as 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (4 times difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth instance versus 13% for the lower instance), demonstrating that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically enhances the statistical conclusions and alludes to the metastasis potential of the NUGC3 cell line relative to AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.

  10. Characterization of airborne particles in an open pit mining region.

    PubMed

    Huertas, José I; Huertas, María E; Solís, Dora A

    2012-04-15

    We characterized airborne particle samples collected from 15 stations in operation since 2007 in one of the world's largest opencast coal mining regions. Using gravimetric, scanning electron microscopy (SEM-EDS), and X-ray photoelectron spectroscopy (XPS) analyses, the samples were characterized in terms of concentration, morphology, particle size distribution (PSD), and elemental composition. All of the total suspended particulate (TSP) samples exhibited a log-normal PSD with a mean of d=5.46 ± 0.32 μm and σ(ln d)=0.61 ± 0.03. Similarly, all particles with an equivalent aerodynamic diameter less than 10 μm (PM(10)) exhibited a log-normal type distribution with a mean of d=3.6 ± 0.38 μm and σ(ln d)=0.55 ± 0.03. XPS analysis indicated that the main elements present in the particles were carbon, oxygen, potassium, and silicon with average mass concentrations of 41.5%, 34.7%, 11.6%, and 5.7% respectively. In SEM micrographs the particles appeared smooth-surfaced and irregular in shape, and tended to agglomerate. The particles were typically clay minerals, including limestone, calcite, quartz, and potassium feldspar. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the receiver operating characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
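
    Under binormality with equal variances, the c-statistic equals Φ(d/√2), where d is the standardized difference of the explanatory variable between those with and without the condition (equivalently, the product of the predictor SD and the log-odds ratio per unit). A quick Monte Carlo check with illustrative values:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(10)

      # Binormal setting: X ~ N(mu0, sd) without the condition, N(mu1, sd) with it.
      mu0, mu1, sd = 0.0, 0.8, 1.0   # illustrative values only
      n = 50_000
      x0 = rng.normal(mu0, sd, n)    # without the condition
      x1 = rng.normal(mu1, sd, n)    # with the condition

      d = (mu1 - mu0) / sd                       # standardized difference (= sd * log-odds ratio)
      c_theory = stats.norm.cdf(d / np.sqrt(2))  # c-statistic under equal-variance binormality

      # Empirical c-statistic via the Mann-Whitney U statistic (AUC = U / (n0 * n1)).
      u = stats.mannwhitneyu(x1, x0, alternative="two-sided").statistic
      c_empirical = u / (len(x0) * len(x1))
      print(c_theory, c_empirical)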

  12. The VMC Survey. XXIX. Turbulence-controlled Hierarchical Star Formation in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Sun, Ning-Chen; de Grijs, Richard; Cioni, Maria-Rosa L.; Rubele, Stefano; Subramanian, Smitha; van Loon, Jacco Th.; Bekki, Kenji; Bell, Cameron P. M.; Ivanov, Valentin D.; Marconi, Marcella; Muraveva, Tatiana; Oliveira, Joana M.; Ripepi, Vincenzo

    2018-05-01

    In this paper we report a clustering analysis of upper main-sequence stars in the Small Magellanic Cloud, using data from the VMC survey (the VISTA near-infrared YJK s survey of the Magellanic system). Young stellar structures are identified as surface overdensities on a range of significance levels. They are found to be organized in a hierarchical pattern, such that larger structures at lower significance levels contain smaller ones at higher significance levels. They have very irregular morphologies, with a perimeter–area dimension of 1.44 ± 0.02 for their projected boundaries. They have a power-law mass–size relation, power-law size/mass distributions, and a log-normal surface density distribution. We derive a projected fractal dimension of 1.48 ± 0.03 from the mass–size relation, or of 1.4 ± 0.1 from the size distribution, reflecting significant lumpiness of the young stellar structures. These properties are remarkably similar to those of a turbulent interstellar medium, supporting a scenario of hierarchical star formation regulated by supersonic turbulence.

  13. Mass-size distribution and concentration of metals from personal exposure to arc welding fume in pipeline construction: a case report.

    PubMed

    Yang, Show-Yi; Lin, Jia-Ming; Young, Li-Hao; Chang, Ching-Wen

    2018-04-07

    We investigate exposure to welding fume metals in pipeline construction, which are responsible for severe respiratory problems. We analyzed air samples obtained using size-fractioning cascade impactors that were attached to welders performing shielded metal and gas tungsten arc welding outdoors. Iron, aluminum, zinc, chromium, manganese, copper, nickel, and lead concentrations in the water-soluble (WS) and water-insoluble (WI) portions were determined separately using inductively coupled plasma mass spectrometry. The mass-size distribution of the welding fume matches a log-normal distribution with two modes. The metal concentrations in the welding fume were ranked as follows: Fe > Al > Zn > Cr > Mn > Ni > Cu > Pb. In the WS portion, the capacities of the metals to dissolve in water are correlated with the metal species but not with particle size. In particular, Zn, Mn, and Pb exhibit relatively higher capacities than Cu, Cr, Al, Fe, and Ni. Exposure of the gas-exchange region of the lungs to WS metals was in the range of 4.9% to 34.6% of the corresponding metals in air, considering the particle-size selection in the lungs, the metal composition by particle size, and the capacity of each metal to dissolve in water.

  14. Log-amplitude statistics for Beck-Cohen superstatistics

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Konno, Hidetoshi

    2013-05-01

    As a possible generalization of Beck-Cohen superstatistical processes, we study non-Gaussian processes with temporal heterogeneity of local variance. To characterize the variance heterogeneity, we define log-amplitude cumulants and log-amplitude autocovariance and derive closed-form expressions of the log-amplitude cumulants for χ2, inverse χ2, and log-normal superstatistical distributions. Furthermore, we show that χ2 and inverse χ2 superstatistics with degree 2 are closely related to an extreme value distribution, called the Gumbel distribution. In these cases, the corresponding superstatistical distributions result in the q-Gaussian distribution with q=5/3 and the bilateral exponential distribution, respectively. Thus, our finding provides a hypothesis that the asymptotic appearance of these two special distributions may be explained by a link with the asymptotic limit distributions involving extreme values. In addition, as an application of our approach, we demonstrated that non-Gaussian fluctuations observed in a stock index futures market can be well approximated by the χ2 superstatistical distribution with degree 2.

  15. Stochastic modelling of non-stationary financial assets

    NASA Astrophysics Data System (ADS)

    Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.

    2017-11-01

    We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions which fit well the empirical data. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.

  16. Suppression of nucleation mode particles by biomass burning in an urban environment: a case study.

    PubMed

    Agus, Emily L; Lingard, Justin J N; Tomlin, Alison S

    2008-08-01

    Measurements of concentrations and size distributions of particles 4.7 to 160 nm were taken using an SMPS during the bonfire and firework celebrations on Bonfire Night in Leeds, UK, 2006. These celebrations provided an opportunity to study size distributions in a unique atmospheric pollution situation during and following a significant emission event due to open biomass burning. A log-normal fitting program was used to determine the characteristics of the modal groups present within hourly averaged size distributions. Results from the modal fitting showed that on bonfire night the smallest nucleation mode, which was present before and after the bonfire event and on comparison weekends, was not detected within the size distribution. In addition, there was a significant shift in the modal diameters of the remaining modes during the peak of the pollution event. Using the concept of a coagulation sink, the atmospheric lifetimes of smaller particles were significantly reduced during the pollution event, and thus were used to explain the disappearance of the smallest nucleation mode as well as changes in particle count mean diameters. The significance for particle mixing state is discussed.

  17. Determining prescription durations based on the parametric waiting time distribution.

    PubMed

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-12-01

    The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
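
    The core of the estimation step described above can be sketched compactly. The snippet below is a minimal, hypothetical illustration only: it fits a two-component waiting time distribution in which prevalent users follow a log-normal forward recurrence density restricted to the observation window and incident users are spread roughly uniformly over it; the subsequent inversion to IAD percentiles (the 80% duration rule) is not reproduced, and all data and starting values are made up.

        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(1)
        delta = 365.0                                        # observation window (days), hypothetical
        t_prev = np.exp(rng.normal(4.3, 0.6, 800))           # prevalent users: log-normal forward recurrence times
        t_prev = t_prev[t_prev < delta]
        t_inc = rng.uniform(0.0, delta, 200)                 # incident users: roughly uniform over the window
        t = np.concatenate([t_prev, t_inc])

        def neg_loglik(params):
            logit_p, mu, log_sigma = params
            p = 1.0 / (1.0 + np.exp(-logit_p))
            sigma = np.exp(log_sigma)
            # log-normal FRD renormalised to the observation window (0, delta]
            frd = stats.lognorm.pdf(t, s=sigma, scale=np.exp(mu))
            frd /= stats.lognorm.cdf(delta, s=sigma, scale=np.exp(mu))
            return -np.sum(np.log(p * frd + (1.0 - p) / delta))

        fit = optimize.minimize(neg_loglik, x0=[1.0, 4.0, 0.0], method="Nelder-Mead")
        logit_p, mu, log_sigma = fit.x
        print("prevalent fraction:", round(1.0 / (1.0 + np.exp(-logit_p)), 2),
              "FRD median (days):", round(np.exp(mu), 1))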

  18. Technical note: An improved approach to determining background aerosol concentrations with PILS sampling on aircraft

    NASA Astrophysics Data System (ADS)

    Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.

    2016-07-01

    Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate that not only are multiple background filter samples essential to attain a representative background, but that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an over-estimation of background samples - and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m-3 for sub-micron aerosol nitrate (NO3-), nitrite (NO2-), ammonium (NH4+), sulfate (SO42-), potassium (K+) and calcium (Ca2+), respectively. The difference in backgrounds calculated from assuming a Gaussian distribution versus a log-normal distribution was most extreme for NH4+, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
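
    The background-estimation idea described above (take the background from the peak, i.e. the mode, of a fitted log-normal rather than from the sample mean) can be illustrated with a short sketch; the data below are synthetic placeholders, not FRAPPÉ values.

        import numpy as np
        from scipy import stats

        # Synthetic background signals (ug m^-3) standing in for the 137 filter samples
        bg = np.random.default_rng(0).lognormal(mean=-2.0, sigma=0.8, size=137)

        shape, loc, scale = stats.lognorm.fit(bg, floc=0)    # two-parameter log-normal fit
        mu, sigma = np.log(scale), shape
        peak_background = np.exp(mu - sigma ** 2)            # mode (peak) of the fitted log-normal
        mean_background = bg.mean()                          # Gaussian-style background for comparison

        print(f"log-normal peak background: {peak_background:.3f}")
        print(f"simple mean background:     {mean_background:.3f}  (higher, so samples are over-corrected)")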

  19. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    USGS Publications Warehouse

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
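
    As one plausible reading of the log-geometric idea sketched above, the shape parameter of a Pareto distribution can be recovered from the ratios of counts in adjacent geometric size classes (for Pareto data binned with ratio r, adjacent bin counts differ by a factor r^k). The example below uses simulated field sizes, not the Alberta data.

        import numpy as np

        rng = np.random.default_rng(7)
        k_true, x_min = 1.2, 1.0
        sizes = x_min * rng.random(5000) ** (-1.0 / k_true)      # Pareto(k) field sizes by inverse-CDF sampling

        r = 2.0                                                  # geometric bin ratio (doubling classes)
        edges = x_min * r ** np.arange(0, 15)
        counts, _ = np.histogram(sizes, bins=edges)

        # Expected ratio of adjacent bin counts is r**k, so k = log(N_j / N_{j+1}) / log(r)
        valid = (counts[:-1] > 0) & (counts[1:] > 0)
        k_hat = np.mean(np.log(counts[:-1][valid] / counts[1:][valid]) / np.log(r))
        print("estimated shape parameter:", round(k_hat, 2))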

  20. Post hoc interlaboratory comparison of single particle ICP-MS size measurements of NIST gold nanoparticle reference materials.

    PubMed

    Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R

    2015-09-01

    Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.

  1. Assessment of the hygienic performances of hamburger patty production processes.

    PubMed

    Gill, C O; Rahn, K; Sloan, K; McMullen, L M

    1997-05-20

    The hygienic conditions of the hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two or one retail outlet, respectively, 25 samples from frozen, chilled or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean log (x) and standard deviation (s) were calculated for the log10 values for each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively. The findings indicate that the general hygienic condition of hamburger patties could be improved by their being manufactured from only manufacturing beef of superior hygienic quality, and by the better management of chilled patties at retail outlets.
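
    As a quick check of the log A calculation mentioned above: if the log10 counts in a set are roughly normal with mean x and standard deviation s, the arithmetic mean A of the counts satisfies log10 A = x + (ln 10 / 2) s^2. A minimal sketch with made-up values:

        import numpy as np

        x_bar, s = 4.6, 0.8                            # hypothetical mean and SD of log10 counts for one set
        log_A = x_bar + (np.log(10.0) / 2.0) * s ** 2  # log10 of the arithmetic mean under log-normality
        print(round(log_A, 2))                         # about 5.34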

  2. Design and characterization of a cough simulator.

    PubMed

    Zhang, Bo; Zhu, Chao; Ji, Zhiming; Lin, Chao-Hsin

    2017-02-23

    Expiratory droplets from human coughing have always been considered as potential carriers of pathogens, responsible for respiratory infectious disease transmission. To study the transmission of disease by human coughing, a transient repeatable cough simulator has been designed and built. Cough droplets are generated by different mechanisms, such as the breaking of mucus, condensation and high-speed atomization from different depths of the respiratory tract. These mechanisms in coughing produce droplets of different sizes, represented by a bimodal distribution of 'fine' and 'coarse' droplets. A cough simulator is hence designed to generate transient sprays with such bimodal characteristics. It consists of a pressurized gas tank, a nebulizer and an ejector, connected in series, which are controlled by computerized solenoid valves. The bimodal droplet size distribution is characterized for the coarse droplets and fine droplets, by fibrous collection and laser diffraction, respectively. The measured size distributions of coarse and fine droplets are reasonably represented by the Rosin-Rammler and log-normal distributions in probability density function, which leads to a bimodal distribution. To assess the hydrodynamic consequences of coughing including droplet vaporization and polydispersion, a Lagrangian model of droplet trajectories is established, with its ambient flow field predetermined from a computational fluid dynamics simulation.
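
    The bimodal size distribution described above can be written as a weighted mixture of a Rosin-Rammler (Weibull) component for the coarse droplets and a log-normal component for the fine droplets. The sketch below uses invented parameter values purely to show the construction.

        import numpy as np
        from scipy import stats

        w_coarse = 0.3                                   # fraction of droplets in the coarse mode (hypothetical)
        coarse = stats.weibull_min(c=2.5, scale=80.0)    # Rosin-Rammler: spread 2.5, characteristic size 80 um
        fine = stats.lognorm(s=0.6, scale=3.0)           # log-normal: median 3 um, log-SD 0.6

        def bimodal_pdf(d):
            """Probability density of droplet diameter d (um) for the combined spray."""
            return w_coarse * coarse.pdf(d) + (1.0 - w_coarse) * fine.pdf(d)

        print(bimodal_pdf(np.array([1.0, 5.0, 50.0, 120.0])))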

  3. Atomisation and droplet formation mechanisms in a model two-phase mixing layer

    NASA Astrophysics Data System (ADS)

    Zaleski, Stephane; Ling, Yue; Fuster, Daniel; Tryggvason, Gretar

    2017-11-01

    We study atomization in a turbulent two-phase mixing layer inspired by the Grenoble air-water experiments. A planar gas jet of large velocity is emitted on top of a planar liquid jet of smaller velocity. The density ratio and momentum ratios are both set at 20 in the numerical simulation in order to ease the simulation. We use a Volume-Of-Fluid method with good parallelisation properties, implemented in our code http://parissimulator.sf.net. Our simulations show two distinct droplet formation mechanisms, one in which thin liquid sheets are punctured to form rapidly expanding holes and the other in which ligaments of irregular shape form and breakup in a manner similar but not identical to jets in Rayleigh-Plateau-Savart instabilities. Observed distributions of particle sizes are extracted for a sequence of ever more refined grids, the largest grid containing approximately eight billion points. Although their accuracy is limited at small sizes by the grid resolution and at large size by statistical effects, the distributions overlap in the central region. The observed distributions are much closer to log normal distributions than to gamma distributions as is also the case for experiments.

  4. Quantitative imaging reveals heterogeneous growth dynamics and treatment-dependent residual tumor distributions in a three-dimensional ovarian cancer model

    NASA Astrophysics Data System (ADS)

    Celli, Jonathan P.; Rizvi, Imran; Evans, Conor L.; Abu-Yousif, Adnan O.; Hasan, Tayyaba

    2010-09-01

    Three-dimensional tumor models have emerged as valuable in vitro research tools, though the power of such systems as quantitative reporters of tumor growth and treatment response has not been adequately explored. We introduce an approach combining a 3-D model of disseminated ovarian cancer with high-throughput processing of image data for quantification of growth characteristics and cytotoxic response. We developed custom MATLAB routines to analyze longitudinally acquired dark-field microscopy images containing thousands of 3-D nodules. These data reveal a reproducible bimodal log-normal size distribution. Growth behavior is driven by migration and assembly, causing an exponential decay in spatial density concomitant with increasing mean size. At day 10, cultures are treated with either carboplatin or photodynamic therapy (PDT). We quantify size-dependent cytotoxic response for each treatment on a nodule by nodule basis using automated segmentation combined with ratiometric batch-processing of calcein and ethidium bromide fluorescence intensity data (indicating live and dead cells, respectively). Both treatments reduce viability, though carboplatin leaves micronodules largely structurally intact with a size distribution similar to untreated cultures. In contrast, PDT treatment disrupts micronodular structure, causing punctate regions of toxicity, shifting the distribution toward smaller sizes, and potentially increasing vulnerability to subsequent chemotherapeutic treatment.

  5. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro

    2011-12-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution which has a characteristic scale, rather than a power-law spectrum. This scale is expected to reflect the generation mechanisms and our analysis is complementary to previous studies with power-law spectrum. We calculate power spectra of energy density and Lorentz force of the log-normal PMFs, and then calculate CMB temperature and polarization angular power spectra from scalar, vector, and tensor modes of perturbations generated from such PMFs. By comparing these spectra with WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data set places the strongest constraint at k ≃ 10^-2.5 Mpc^-1, with the upper limit B ≲ 3 nG.

  6. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  7. Optimum size of nanorods for heating application

    NASA Astrophysics Data System (ADS)

    Seshadri, G.; Thaokar, Rochish; Mehra, Anurag

    2014-08-01

    Magnetic nanoparticles (MNPs) have become increasingly important in heating applications such as hyperthermia treatment of cancer due to their ability to release heat when a remote external alternating magnetic field is applied. It has been shown that the heating capability of such particles varies significantly with the size of particles used. In this paper, we theoretically evaluate the heating capability of rod-shaped MNPs and identify conditions under which these particles display the highest efficiency. For optimally sized monodisperse particles, the power generated by rod-shaped particles is found to be equal to that generated by spherical particles. However, for particles which are not monodisperse, rod-shaped particles are found to be more effective in heating as a result of the greater spread in the power density distribution curve. Additionally, for rod-shaped particles, a dispersion in the radius of the particle contributes more to the reduction in loss power when compared to a dispersion in the length. We further identify the optimum size, i.e., the radius and length of nanorods, given a bi-variate log-normal distribution of particle size in two dimensions.

  8. Explorations in statistics: the log transformation.

    PubMed

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
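
    A small sketch of two effects described above (the log transformation equalizing group standard deviations, and the Box-Cox method confirming that a log transformation is appropriate), using synthetic data rather than any experiment from the paper:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        # Skewed observations whose SD grows roughly in proportion to the mean
        groups = [rng.lognormal(mean=m, sigma=0.5, size=30) for m in (1.0, 2.0, 3.0)]

        print("raw SDs:      ", [round(np.std(g, ddof=1), 2) for g in groups])
        print("log-scale SDs:", [round(np.std(np.log(g), ddof=1), 2) for g in groups])

        # Box-Cox check: an estimated lambda near 0 indicates the log transformation is suitable
        _, lam = stats.boxcox(np.concatenate(groups))
        print("Box-Cox lambda:", round(lam, 2))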

  9. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    NASA Technical Reports Server (NTRS)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution is a power law P̄(log E) in the wave field E, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P̄(log E), when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log normal statistics predicted by SGT at each location.

  10. Impact of aerosol size representation on modeling aerosol-cloud interactions

    DOE PAGES

    Zhang, Y.; Easter, R. C.; Ghan, S. J.; ...

    2002-11-07

    In this study, we use a 1-D version of a climate-aerosol-chemistry model with both modal and sectional aerosol size representations to evaluate the impact of aerosol size representation on modeling aerosol-cloud interactions in shallow stratiform clouds observed during the 2nd Aerosol Characterization Experiment. Both the modal (with prognostic aerosol number and mass or prognostic aerosol number, surface area and mass, referred to as the Modal-NM and Modal-NSM) and the sectional approaches (with 12 and 36 sections) predict total number and mass for interstitial and activated particles that are generally within several percent of references from a high resolution 108-section approach. The modal approach with prognostic aerosol mass but diagnostic number (referred to as the Modal-M) cannot accurately predict the total particle number and surface areas, with deviations from the references ranging from 7-161%. The particle size distributions are sensitive to size representations, with normalized absolute differences of up to 12% and 37% for the 36- and 12-section approaches, and 30%, 39%, and 179% for the Modal-NSM, Modal-NM, and Modal-M, respectively. For the Modal-NSM and Modal-NM, differences from the references are primarily due to the inherent assumptions and limitations of the modal approach. In particular, they cannot resolve the abrupt size transition between the interstitial and activated aerosol fractions. For the 12- and 36-section approaches, differences are largely due to limitations of the parameterized activation for non-log-normal size distributions, plus the coarse resolution for the 12-section case. Differences are larger both with higher aerosol (i.e., less complete activation) and higher SO2 concentrations (i.e., greater modification of the initial aerosol distribution).

  11. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
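
    A minimal sketch of the equal-variance result quoted above: for a normally distributed explanatory variable with common standard deviation sigma in both groups and log-odds ratio beta per unit, the c-statistic equals Phi(sigma * beta / sqrt(2)). The values below are hypothetical, and the empirical c-statistic is computed directly from its Mann-Whitney definition.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)
        sigma, beta = 1.5, 0.8                      # hypothetical SD and log-odds ratio per unit of X
        delta = beta * sigma ** 2                   # group mean difference implied by binormality

        x0 = rng.normal(0.0, sigma, 4000)           # those without the condition
        x1 = rng.normal(delta, sigma, 4000)         # those with the condition

        empirical_c = (x1[:, None] > x0[None, :]).mean()    # Mann-Whitney estimate of P(X1 > X0)
        predicted_c = norm.cdf(sigma * beta / np.sqrt(2.0))
        print(round(empirical_c, 3), round(predicted_c, 3))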

  12. Pore-scale modeling of saturated permeabilities in random sphere packings.

    PubMed

    Pan, C; Hilpert, M; Miller, C T

    2001-12-01

    We use two pore-scale approaches, lattice-Boltzmann (LB) and pore-network modeling, to simulate single-phase flow in simulated sphere packings that vary in porosity and sphere-size distribution. For both modeling approaches, we determine the size of the representative elementary volume with respect to the permeability. Permeabilities obtained by LB modeling agree well with Rumpf and Gupte's experiments in sphere packings for small Reynolds numbers. The LB simulations agree well with the empirical Ergun equation for intermediate but not for small Reynolds numbers. We suggest a modified form of Ergun's equation to describe both low and intermediate Reynolds number flows. The pore-network simulations agree well with predictions from the effective-medium approximation but underestimate the permeability due to the simplified representation of the porous media. Based on LB simulations in packings with log-normal sphere-size distributions, we suggest a permeability relation with respect to the porosity, as well as the mean and standard deviation of the sphere diameter.

  13. Best Statistical Distribution of flood variables for Johor River in Malaysia

    NASA Astrophysics Data System (ADS)

    Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.

    2012-12-01

    A complex flood event is always characterized by a few characteristics such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distribution of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on water year (July - June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggests that GEV is the best for peakflow. The result of this research can be used to improve flood frequency analysis. (Figure: comparison between the Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the cumulative distribution function of peakflow.)
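
    A compact sketch of the fitting-and-testing workflow described above, with a synthetic peakflow series standing in for the Rantau Panjang record (the Log Pearson III candidate is fitted to log-flows; the Anderson-Darling step is omitted because SciPy's built-in version supports only a few distributions):

        import numpy as np
        from scipy import stats

        peakflow = np.random.default_rng(11).lognormal(mean=5.0, sigma=0.4, size=45)  # synthetic series

        candidates = {
            "Log Normal": stats.lognorm,
            "Generalized Pareto": stats.genpareto,
            "Log Pearson III": stats.pearson3,      # fitted to log10 flows below
            "Normal": stats.norm,
            "GEV": stats.genextreme,
        }

        for name, dist in candidates.items():
            data = np.log10(peakflow) if name == "Log Pearson III" else peakflow
            params = dist.fit(data)
            ks_stat, p_value = stats.kstest(data, dist.name, args=params)
            print(f"{name:20s} KS = {ks_stat:.3f}  p = {p_value:.3f}")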

  14. A Feasibility Study of Expanding the F404 Aircraft Engine Repair Capability at the Aircraft Intermediate Maintenance Department

    DTIC Science & Technology

    1993-06-01

    (Abstract text garbled in extraction; only fragments of the report's table of contents and a results table survive. Recoverable content: the report covers objectives and history, utilization and additional manpower requirements at the selected AIMDs, notes that until late 1991 both NADEP JAX and NADEP North Island (NORIS) held the repair capability, and compares all-triangular against all-log-normal distributions for service times at AIMD Cecil Field.)

  15. Viscosity and transient electric birefringence study of clay colloidal aggregation.

    PubMed

    Bakk, Audun; Fossum, Jon O; da Silva, Geraldo J; Adland, Hans M; Mikkelsen, Arne; Elgsaeter, Arnljot

    2002-02-01

    We study a synthetic clay suspension of laponite at different particle and NaCl concentrations by measuring stationary shear viscosity and transient electrically induced birefringence (TEB). On one hand the viscosity data are consistent with the particles being spheres and the particles being associated with a large amount of bound water. On the other hand the viscosity data are also consistent with the particles being asymmetric, consistent with single laponite platelets associated with very few monolayers of water. We analyze the TEB data by employing two different models of aggregate size (effective hydrodynamic radius) distribution: (1) a bidisperse model and (2) a log-normally distributed model. Both models fit, in the same manner, fairly well to the experimental TEB data and they indicate that the suspension consists of polydisperse particles. The models also appear to confirm that the aggregates increase in size with increasing ionic strength. The smallest particles at low salt concentrations seem to be monomers and oligomers.

  16. Erosion associated with cable and tractor logging in northwestern California

    Treesearch

    R. M. Rice; P. A. Datzman

    1981-01-01

    Abstract - Erosion and site conditions were measured at 102 logged plots in northwestern California. Erosion averaged 26.8 m³/ha. A log-normal distribution was a better fit to the data. The antilog of the mean of the logarithms of erosion was 3.2 m³/ha. The Coast District Erosion Hazard Rating was a poor predictor of erosion related to logging. In a new equation...

  17. Scaling laws of the size-distribution of monogenetic volcanoes within the Michoacán-Guanajuato Volcanic Field (Mexico)

    NASA Astrophysics Data System (ADS)

    Pérez-López, R.; Legrand, D.; Garduño-Monroy, V. H.; Rodríguez-Pascua, M. A.; Giner-Robles, J. L.

    2011-04-01

    The Michoacán-Guanajuato Volcanic Field displays about 1040 monogenetic volcanoes mainly composed of basaltic cinder cones. This monogenetic volcanic field is the consequence of a dextral transtensive tectonic regime within the Transmexican Volcanic Belt (TMVB), the largest intracontinental volcanic arc around the world, related to the subduction of the Rivera and Cocos plates underneath the North American Plate. We performed a statistical analysis of the size-distribution of the basal diameter (Wco) of cinder cones. The dataset used here was compiled by Hasenaka and Carmichael (1985). Monogenetic volcanoes obey a power law very similar to the Gutenberg-Richter law for earthquakes with respect to their size-distribution: log10(N ≥ Wco) = α - β log10(Wco), with β = 5.01 and α = 2.98. Therefore, the monogenetic volcanoes exhibit an empirical power-law (Wco) size-distribution, suggesting a self-organized criticality phenomenon.
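
    The Gutenberg-Richter-style fit quoted above amounts to a linear regression of log10 cumulative counts on log10 basal diameter. The sketch below shows the procedure on simulated Pareto-distributed diameters; it does not use the Hasenaka and Carmichael (1985) compilation.

        import numpy as np

        rng = np.random.default_rng(5)
        wco = np.sort(0.4 * rng.random(1040) ** (-1.0 / 5.0))   # synthetic basal diameters (km), Pareto-like

        n_ge = np.arange(len(wco), 0, -1)                        # cumulative counts N(>= Wco), largest rank first

        # Fit log10 N(>= Wco) = alpha - beta * log10(Wco) by least squares
        slope, intercept = np.polyfit(np.log10(wco), np.log10(n_ge), 1)
        print("alpha =", round(intercept, 2), "beta =", round(-slope, 2))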

  18. Double stars with wide separations in the AGK3 - II. The wide binaries and the multiple systems*

    NASA Astrophysics Data System (ADS)

    Halbwachs, J.-L.; Mayor, M.; Udry, S.

    2017-02-01

    A large observation programme was carried out to measure the radial velocities of the components of a selection of common proper motion (CPM) stars to select the physical binaries. 80 wide binaries (WBs) were detected, and 39 optical pairs were identified. By adding CPM stars with separations close enough to be almost certain that they are physical, a bias-controlled sample of 116 WBs was obtained, and used to derive the distribution of separations from 100 to 30 000 au. The distribution obtained does not match the log-constant distribution, but agrees with the log-normal distribution. The spectroscopic binaries detected among the WB components were used to derive statistical information about the multiple systems. The close binaries in WBs seem to be like those detected in other field stars. As for the WBs, they seem to obey the log-normal distribution of periods. The number of quadruple systems agrees with the no correlation hypothesis; this indicates that an environment conducive to the formation of WBs does not favour the formation of subsystems with periods shorter than 10 yr.

  19. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum likelihood estimates of the parameters for a mixture of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
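
    The abstract does not spell out the update equations, so the sketch below shows the standard EM-style fixed-point iteration for a two-component univariate normal mixture, which is the usual modern form of such an iterative maximum-likelihood procedure; the data are synthetic.

        import numpy as np
        from scipy.stats import norm

        def em_normal_mixture(x, n_iter=200):
            """Standard EM updates for a two-component univariate normal mixture."""
            w = np.array([0.5, 0.5])
            mu = np.array([x.min(), x.max()])
            sd = np.array([x.std(), x.std()])
            for _ in range(n_iter):
                # E-step: posterior responsibility of each component for each observation
                dens = w * norm.pdf(x[:, None], mu, sd)
                resp = dens / dens.sum(axis=1, keepdims=True)
                # M-step: weighted maximum-likelihood parameter updates
                nk = resp.sum(axis=0)
                w = nk / len(x)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            return w, mu, sd

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 0.5, 200)])
        print(em_normal_mixture(x))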

  20. Are CO Observations of Interstellar Clouds Tracing the H2?

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Glover, S. C. O.; Klessen, R. S.; Mac Low, M.

    2010-01-01

    Interstellar clouds are commonly observed through the emission of rotational transitions from carbon monoxide (CO). However, the abundance ratio of CO to molecular hydrogen (H2), which is the most abundant molecule in molecular clouds, is only about 10^-4. This raises the important question of whether the observed CO emission is actually tracing the bulk of the gas in these clouds, and whether it can be used to derive quantities like the total mass of the cloud, the gas density distribution function, the fractal dimension, and the velocity dispersion--size relation. To evaluate the usability and accuracy of CO as a tracer for H2 gas, we generate synthetic observations of hydrodynamical models that include a detailed chemical network to follow the formation and photo-dissociation of H2 and CO. These three-dimensional models of turbulent interstellar cloud formation self-consistently follow the coupled thermal, dynamical and chemical evolution of 32 species, with a particular focus on H2 and CO (Glover et al. 2009). We find that CO primarily traces the dense gas in the clouds, however, with a significant scatter due to turbulent mixing and self-shielding of H2 and CO. The H2 probability distribution function (PDF) is well-described by a log-normal distribution. In contrast, the CO column density PDF has a strongly non-Gaussian low-density wing, not at all consistent with a log-normal distribution. Centroid velocity statistics show that CO is more intermittent than H2, leading to an overestimate of the velocity scaling exponent in the velocity dispersion--size relation. With our systematic comparison of H2 and CO data from the numerical models, we hope to provide a statistical formula to correct for the bias of CO observations. CF acknowledges financial support from a Kade Fellowship of the American Museum of Natural History.

  1. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.

  2. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is assumed as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2·25 m2), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1·115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme (Gumbel) distribution with rare occurrences of extremely large genomes (positive-skewed), as it is similar for the log-normal distribution of the whole Angiosperms. Even plants from the same plot frequently varied considerably in genome size and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0·05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968

  3. The effect of dispersed Petrobaltic oil droplet size on photosynthetically active radiation in marine environment.

    PubMed

    Haule, Kamila; Freda, Włodzimierz

    2016-04-01

    Oil pollution in seawater, primarily visible on sea surface, becomes dispersed as an effect of wave mixing as well as chemical dispersant treatment, and forms spherical oil droplets. In this study, we examined the influence of oil droplet size of highly dispersed Petrobaltic crude on the underwater visible light flux and the inherent optical properties (IOPs) of seawater, including absorption, scattering, backscattering and attenuation coefficients. On the basis of measured data and Mie theory, we calculated the IOPs of dispersed Petrobaltic crude oil in constant concentration, but different log-normal size distributions. We also performed a radiative transfer analysis, in order to evaluate the influence on the downwelling irradiance Ed, remote sensing reflectance Rrs and diffuse reflectance R, using in situ data from the Baltic Sea. We found that during dispersion, there occurs a boundary size distribution characterized by a peak diameter d0 = 0.3 μm causing a maximum Ed increase of 40% within 0.5-m depth, and the maximum Ed decrease of 100% at depths below 5 m. Moreover, we showed that the impact of size distribution on the "blue to green" ratios of Rrs and R varies from 24% increase to 27% decrease at the same crude oil concentration.

  4. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The proposed optimal wavelength set is selected by a method which makes the measurement signals sensitive to wavelength and effectively decreases the degree of ill-conditioning of the coefficient matrix of the linear system, thereby enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the experimentally measured ASD over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
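
    The retrieval step itself reduces to solving an ill-conditioned linear system in a least-squares sense, which is what LSQR does. The sketch below is a schematic stand-in: the kernel matrix is random rather than computed from the ADA, and the damping term is an assumption used only to show how regularised LSQR is invoked.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(2)
        m, n = 8, 40                                    # m spectral measurements, n size bins (hypothetical)
        K = rng.random((m, n))                          # placeholder kernel; a real K comes from the ADA
        radii = np.linspace(0.1, 4.0, n)
        f_true = np.exp(-0.5 * ((radii - 1.5) / 0.4) ** 2)       # smooth "true" size distribution
        g = K @ f_true + rng.normal(0.0, 1e-3, m)                # noisy measurement vector

        # damp > 0 adds Tikhonov-style regularisation, stabilising the ill-conditioned retrieval
        f_retrieved = lsqr(K, g, damp=1e-2)[0]
        print(np.round(f_retrieved[:10], 3))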

  5. Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses

    NASA Technical Reports Server (NTRS)

    Wijers, Ralph A. M. J.; Lubin, Lori M.

    1994-01-01

    We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H0, that the distribution is a pure power law: F(x) ∝ x^-α. We study one simple member of the class, named the `bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law where the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H0 that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a `bend' or `break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightness from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log N(>Cmax) - log Cmax distribution.

  6. Distribution of runup heights of the December 26, 2004 tsunami in the Indian Ocean

    NASA Astrophysics Data System (ADS)

    Choi, Byung Ho; Hong, Sung Jin; Pelinovsky, Efim

    2006-07-01

    A massive earthquake with magnitude 9.3, which occurred on December 26, 2004 off northern Sumatra, generated huge tsunami waves that affected many coastal countries in the Indian Ocean. A number of field surveys have been performed after this tsunami event; in particular, several surveys in the south/east coast of India, Andaman and Nicobar Islands, Sri Lanka, Sumatra, Malaysia, and Thailand have been organized by the Korean Society of Coastal and Ocean Engineers from January to August 2005. The spatial distribution of the tsunami runup is used to analyze the distribution function of the wave heights on different coasts. A theoretical interpretation of this distribution, associated with the random coastal bathymetry and coastline, leads to log-normal functions. Observed data are also in very good agreement with the log-normal distribution, confirming the important role of the variable ocean bathymetry in the formation of the irregular wave height distribution along the coasts.

  7. Microfracture spacing distributions and the evolution of fracture patterns in sandstones

    NASA Astrophysics Data System (ADS)

    Hooker, J. N.; Laubach, S. E.; Marrett, R.

    2018-03-01

    Natural fracture patterns in sandstone were sampled using scanning electron microscope-based cathodoluminescence (SEM-CL) imaging. All fractures are opening-mode and are fully or partially sealed by quartz cement. Most sampled fractures are too small to be height-restricted by sedimentary layers. At very low strains (<∼0.001), fracture spatial distributions are indistinguishable from random, whereas at higher strains, fractures are generally statistically clustered. All 12 large (N > 100) datasets show spacings that are best fit by log-normal size distributions, compared to exponential, power law, or normal distributions. The clustering of fractures suggests that the locations of natural fractures are not determined by a random process. To investigate natural fracture localization, we reconstructed the opening history of a cluster of fractures within the Huizachal Group in northeastern Mexico, using fluid inclusions from synkinematic cements and thermal-history constraints. The largest fracture, which is the only fracture in the cluster visible to the naked eye, among 101 present, opened relatively late in the sequence. This result suggests that the growth of sets of fractures is a self-organized process, in which small, initially isolated fractures grow and progressively interact, with preferential growth of a subset of fractures developing at the expense of growth of the rest. Size-dependent sealing of fractures within sets suggests that synkinematic cementation may contribute to fracture clustering.

  8. Simple display system of mechanical properties of cells and their dispersion.

    PubMed

    Shimizu, Yuji; Kihara, Takanori; Haghparast, Seyed Mohammad Ali; Yuba, Shunsuke; Miyake, Jun

    2012-01-01

    The mechanical properties of cells are unique indicators of their states and functions. However, it is difficult to recognize the degrees of mechanical properties, due to the small size of the cell and the broad distribution of the mechanical properties. Here, we developed a simple virtual reality system for presenting the mechanical properties of cells and their dispersion using a haptic device and a PC. This system simulates atomic force microscopy (AFM) nanoindentation experiments for floating cells in virtual environments. An operator can virtually position the AFM spherical probe over a round cell with the haptic handle on the PC monitor and feel the force interaction. The Young's modulus of mesenchymal stem cells and HEK293 cells in the floating state was measured by AFM. The distribution of the Young's modulus of these cells was broad, and the distribution complied with a log-normal pattern. To represent the mechanical properties together with the cell variance, we used a log-normal distribution-dependent random number determined by the mode and variance values of the Young's modulus of these cells. The represented Young's modulus was determined for each touching event of the probe surface and the cell object, and the haptic device-generating force was calculated using a Hertz model corresponding to the indentation depth and the fixed Young's modulus value. Using this system, we can feel the mechanical properties and their dispersion in each cell type in real time. This system will help us not only recognize the degrees of mechanical properties of diverse cells but also share them with others.

  9. Simple Display System of Mechanical Properties of Cells and Their Dispersion

    PubMed Central

    Shimizu, Yuji; Kihara, Takanori; Haghparast, Seyed Mohammad Ali; Yuba, Shunsuke; Miyake, Jun

    2012-01-01

    The mechanical properties of cells are unique indicators of their states and functions. However, it is difficult to recognize the degrees of mechanical properties, due to the small size of the cell and the broad distribution of the mechanical properties. Here, we developed a simple virtual reality system for presenting the mechanical properties of cells and their dispersion using a haptic device and a PC. This system simulates atomic force microscopy (AFM) nanoindentation experiments for floating cells in virtual environments. An operator can virtually position the AFM spherical probe over a round cell with the haptic handle on the PC monitor and feel the force interaction. The Young's modulus of mesenchymal stem cells and HEK293 cells in the floating state was measured by AFM. The distribution of the Young's modulus of these cells was broad, and the distribution complied with a log-normal pattern. To represent the mechanical properties together with the cell variance, we used a log-normal distribution-dependent random number determined by the mode and variance values of the Young's modulus of these cells. The represented Young's modulus was determined for each touching event of the probe surface and the cell object, and the haptic device-generating force was calculated using a Hertz model corresponding to the indentation depth and the fixed Young's modulus value. Using this system, we can feel the mechanical properties and their dispersion in each cell type in real time. This system will help us not only recognize the degrees of mechanical properties of diverse cells but also share them with others. PMID:22479595
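
    The force computation described in the two records above can be sketched in a few lines: draw a Young's modulus from a log-normal distribution and evaluate the Hertz contact force for a rigid sphere indenting an elastic half-space. All numbers below (probe radius, Poisson ratio, log-normal parameters) are illustrative assumptions, not values from the paper.

        import numpy as np

        def hertz_force(delta_m, E_pa, nu=0.5, radius_m=2.5e-6):
            """Hertz contact force (N) for a rigid sphere of given radius indenting an elastic half-space."""
            return (4.0 / 3.0) * (E_pa / (1.0 - nu ** 2)) * np.sqrt(radius_m) * delta_m ** 1.5

        rng = np.random.default_rng(8)
        # Log-normal Young's modulus (Pa): median 500 Pa, log-SD 0.7 (illustrative values only)
        E_samples = rng.lognormal(mean=np.log(500.0), sigma=0.7, size=5)

        for E in E_samples:
            force_pN = hertz_force(1e-6, E) * 1e12          # force at 1 um indentation, in piconewtons
            print(f"E = {E:7.1f} Pa -> F = {force_pN:8.1f} pN")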

  10. Workload Characterization and Performance Implications of Large-Scale Blog Servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho

    With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) The transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) User access for blog articles does not show temporal locality, but is strongly biased towards those posted with image or audio files. We additionally discuss the potential performance improvement through clustering of small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.

  11. Landslides after clearcut logging in a coast redwood forest

    Treesearch

    Leslie M. Reid; Elizabeth T. Keppeler

    2012-01-01

    Landslides have been mapped at least annually in the 473 ha North Fork Caspar Creek watershed since 1985, allowing evaluation of landslide distribution, characteristics, and rates associated with second-entry partial clearcut logging of 1989 to 1992. Comparison of sliding rates in logged and forested areas shows no appreciable difference for streamside slides (size...

  12. Prediction of Cavitation Depth in an Al-Cu Alloy Melt with Bubble Characteristics Based on Synchrotron X-ray Radiography

    NASA Astrophysics Data System (ADS)

    Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode

    2018-06-01

    The size of cavitation region is a key parameter to estimate the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles which are dependent on bubble characteristics and ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles has been made to quantitatively measure the size of cavitation region and volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that cavitation bubbles maintain a log-normal size distribution, and the volume fraction of cavitation bubbles obeys a tanh function with the applied ultrasonic intensity. Using the experimental values of bubble characteristics as input, the predicted cavitation depth agrees well with observations except for a slight deviation at higher acoustic intensities. Further analysis shows that the increase of bubble volume and bubble size both leads to higher attenuation by cavitation bubbles, and hence, smaller cavitation depth. The current model offers a guideline to implement UST, especially for structural refinement.

  14. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    Weak atmospheric turbulence condition in an optical wireless communication (OWC) is captured by log-normal distribution. The analytical evaluation of average bit error rate (BER) of an OWC system under weak turbulence is intractable as it involves the statistical averaging of Gaussian Q-function over log-normal distribution. In this paper, a simple closed form approximation for BER of OWC system under weak turbulence is given. Computation of BER for various modulation schemes is carried out using proposed expression. The results obtained using proposed expression compare favorably with those obtained using Gauss-Hermite quadrature approximation and Monte Carlo Simulations.
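
    As an illustration of the averaging step described above, the following sketch numerically evaluates the mean BER of a log-normally faded link by Gauss-Hermite quadrature and checks it against Monte Carlo sampling. It assumes a simple on-off-keying style model with conditional BER Q(h·√SNR) and unit-mean irradiance h; the function names and parameter values are illustrative and are not the closed-form approximation proposed in the paper.

        import numpy as np
        from scipy.stats import norm

        def avg_ber_gauss_hermite(snr_db, sigma, n_nodes=20):
            """Average BER over log-normal irradiance h (E[h] = 1), assuming a
            conditional BER of Q(h * sqrt(SNR)); Gauss-Hermite quadrature."""
            snr = 10 ** (snr_db / 10.0)
            mu = -0.5 * sigma ** 2                       # unit-mean log-normal
            x, w = np.polynomial.hermite.hermgauss(n_nodes)
            h = np.exp(np.sqrt(2.0) * sigma * x + mu)
            return np.sum(w * norm.sf(h * np.sqrt(snr))) / np.sqrt(np.pi)

        def avg_ber_monte_carlo(snr_db, sigma, n=200_000, seed=0):
            rng = np.random.default_rng(seed)
            snr = 10 ** (snr_db / 10.0)
            h = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma, size=n)
            return norm.sf(h * np.sqrt(snr)).mean()      # norm.sf is the Q-function

        for snr_db in (5, 10, 15):
            print(snr_db, avg_ber_gauss_hermite(snr_db, 0.3), avg_ber_monte_carlo(snr_db, 0.3))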

  15. Log-Normality and Multifractal Analysis of Flame Surface Statistics

    NASA Astrophysics Data System (ADS)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2013-11-01

    The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, within predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis.

  16. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
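
    A minimal sketch of the transformation pair, assuming the commonly quoted form z = (1/b)·ln(sinh(a + b·y)); the parameter values below are purely illustrative and are not taken from the case studies.

        import numpy as np

        def log_sinh(y, a, b):
            """Log-sinh transform z = (1/b) * ln(sinh(a + b*y)), with a, b > 0."""
            return np.log(np.sinh(a + b * y)) / b

        def inv_log_sinh(z, a, b):
            """Inverse transform y = (arcsinh(exp(b*z)) - a) / b."""
            return (np.arcsinh(np.exp(b * z)) - a) / b

        y = np.array([0.1, 1.0, 10.0, 100.0])    # positively skewed, streamflow-like values
        a, b = 0.05, 0.02                        # illustrative parameters
        z = log_sinh(y, a, b)
        print(z, inv_log_sinh(z, a, b))          # second output recovers y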

  17. Estimation of Microbial Contamination of Food from Prevalence and Concentration Data: Application to Listeria monocytogenes in Fresh Vegetables▿

    PubMed Central

    Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric

    2007-01-01

    A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacteria Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by assuming one contaminated sample in prevalence studies in which all samples were in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated at concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999, and a lower estimation of contamination of leafy salads than of sprouts and other vegetables. The value of the mixture model for the estimation of microbial contamination is discussed. PMID:17098926

  18. Statistical characteristics of the spatial distribution of territorial contamination by radionuclides from the Chernobyl accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arutyunyan, R.V.; Bol'shov, L.A.; Vasil'ev, S.K.

    1994-06-01

    The objective of this study was to clarify a number of issues related to the spatial distribution of contaminants from the Chernobyl accident. The effects of local statistics were addressed by collecting and analyzing (for Cesium 137) soil samples from a number of regions, and it was found that sample activity differed by a factor of 3-5. The effect of local non-uniformity was estimated by modeling the distribution of the average activity of a set of five samples for each of the regions, with the spread in the activities for a ±2 range being equal to 25%. The statistical characteristics of the distribution of contamination were then analyzed and found to follow a log-normal distribution with the standard deviation being a function of test area. All data for the Bryanskaya Oblast area were analyzed statistically and were adequately described by a log-normal function.

  19. Stochastic Modeling Approach to the Incubation Time of Prionic Diseases

    NASA Astrophysics Data System (ADS)

    Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.

    2003-05-01

    Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
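
    The key step above, that a log-normally distributed rate constant produces a log-normal incubation time when the two are inversely related, can be checked with a short simulation; the rate-constant parameters below are illustrative and are not fitted to the BSE data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Rate constant k assumed log-normally distributed (illustrative values).
        mu_log_k, sigma_log_k = np.log(0.05), 0.4
        k = rng.lognormal(mu_log_k, sigma_log_k, size=50_000)

        # If the mean-field model gives an incubation time T proportional to 1/k,
        # then log T = const - log k is again normal, so T is log-normal.
        T = 1.0 / k

        shape, loc, scale = stats.lognorm.fit(T, floc=0.0)
        print(shape)                        # recovers sigma_log_k ~ 0.4
        print(scale, np.exp(-mu_log_k))     # median incubation time ~ 20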

  20. A comparison of methods to handle skew distributed cost variables in the analysis of the resource consumption in schizophrenia treatment.

    PubMed

    Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C

    2002-03-01

    Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least square regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model (GLM) with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model are normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by nonparametric methods that are robust against deviations from normality and homoscedasticity of the residuals is a suitable alternative to transformation of the skewed dependent cost variable. Further studies with more adequate case numbers are needed to confirm the results.

  1. Statistical Characterization of the Mechanical Parameters of Intact Rock Under Triaxial Compression: An Experimental Proof of the Jinping Marble

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo

    2016-12-01

    We investigated the statistical characteristics and probability distribution of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (i.e., 5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical results relating to the mechanical parameters of rock presented new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing the random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.

  2. Study of the Size and Shape of Synapses in the Juvenile Rat Somatosensory Cortex with 3D Electron Microscopy

    PubMed Central

    Rodríguez, José-Rodrigo; DeFelipe, Javier

    2018-01-01

    Changes in the size of the synaptic junction are thought to have significant functional consequences. We used focused ion beam milling and scanning electron microscopy (FIB/SEM) to obtain stacks of serial sections from the six layers of the rat somatosensory cortex. We have segmented in 3D a large number of synapses (n = 6891) to analyze the size and shape of excitatory (asymmetric) and inhibitory (symmetric) synapses, using dedicated software. This study provided three main findings. Firstly, the mean synaptic sizes were smaller for asymmetric than for symmetric synapses in all cortical layers. In all cases, synaptic junction sizes followed a log-normal distribution. Secondly, most cortical synapses had disc-shaped postsynaptic densities (PSDs; 93%). A few were perforated (4.5%), while a smaller proportion (2.5%) showed a tortuous horseshoe-shaped perimeter. Thirdly, the curvature was larger for symmetric than for asymmetric synapses in all layers. However, there was no correlation between synaptic area and curvature. PMID:29387782

  3. Study of the Size and Shape of Synapses in the Juvenile Rat Somatosensory Cortex with 3D Electron Microscopy.

    PubMed

    Santuy, Andrea; Rodríguez, José-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Angel

    2018-01-01

    Changes in the size of the synaptic junction are thought to have significant functional consequences. We used focused ion beam milling and scanning electron microscopy (FIB/SEM) to obtain stacks of serial sections from the six layers of the rat somatosensory cortex. We have segmented in 3D a large number of synapses ( n = 6891) to analyze the size and shape of excitatory (asymmetric) and inhibitory (symmetric) synapses, using dedicated software. This study provided three main findings. Firstly, the mean synaptic sizes were smaller for asymmetric than for symmetric synapses in all cortical layers. In all cases, synaptic junction sizes followed a log-normal distribution. Secondly, most cortical synapses had disc-shaped postsynaptic densities (PSDs; 93%). A few were perforated (4.5%), while a smaller proportion (2.5%) showed a tortuous horseshoe-shaped perimeter. Thirdly, the curvature was larger for symmetric than for asymmetric synapses in all layers. However, there was no correlation between synaptic area and curvature.

  4. Fractal Structures on Fe3O4 Ferrofluid: A Small-Angle Neutron Scattering Study

    NASA Astrophysics Data System (ADS)

    Giri Rachman Putra, Edy; Seong, Baek Seok; Shin, Eunjoo; Ikram, Abarrul; Ani, Sistin Ari; Darminto

    2010-10-01

    Small-angle neutron scattering (SANS), a powerful technique for revealing large-scale structures, was applied to investigate the fractal structures of a water-based Fe3O4 ferrofluid (magnetic fluid). The natural magnetite Fe3O4 from iron sand of several rivers in East Java Province of Indonesia was extracted and purified using a magnetic separator. Four different ferrofluid concentrations, i.e. 0.5, 1.0, 2.0 and 3.0 Molar (M), were synthesized through a co-precipitation method and then dispersed in tetramethyl ammonium hydroxide (TMAH) as surfactant. The fractal aggregates in the ferrofluid samples were observed from their SANS scattering distributions, confirming the correlation with their concentrations. The mass fractal dimension changed from about 3 to 2 as the ferrofluid concentration increased, showing a deviation of the slope in the intermediate scattering vector q range. The size of the primary magnetic particle as a building block was determined by fitting the scattering profiles with a log-normal sphere model calculation. The mean size of these magnetic particles is about 60 - 100 Å in diameter with a particle size distribution σ = 0.5.

  5. Improvement of Reynolds-Stress and Triple-Product Lag Models

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Lillard, Randolph P.

    2017-01-01

    The Reynolds-stress and triple-product Lag models were created with a normal stress distribution defined by a 4:3:2 ratio of streamwise, spanwise and wall-normal stresses, and a ratio of r_w = 0.3k in the log-layer region of high Reynolds number flat plate flow, which implies R11+ = 4/((9/2)·0.3) ≈ 2.96. More recent measurements show a more complex picture of the log-layer region at high Reynolds numbers. The first cut at improving these models, along with the direction for future refinements, is described. Comparison with recent high Reynolds number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log-layer behavior.
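
    One way to recover the quoted value, assuming the 4:3:2 ratio refers to the streamwise, spanwise and wall-normal normal stresses (written 4a, 3a, 2a) and that the 0.3k quantity is the stress appearing in the denominator of R11+:

        k = (4a + 3a + 2a)/2 = (9/2)a
        R11+ = 4a / (0.3 k) = 4a / (0.3 * (9/2) a) = 4 / ((9/2) * 0.3) ≈ 2.96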

  6. 210Po Log-normal distribution in human urines: Survey from Central Italy people

    PubMed Central

    Sisti, D.; Rocchi, M. B. L.; Meli, M. A.; Desideri, D.

    2009-01-01

    The death in London of the former secret service agent Alexander Livtinenko on 23 November 2006 generally attracted the attention of the public to the rather unknown radionuclide 210Po. This paper presents the results of a monitoring programme of 210Po background levels in the urines of noncontaminated people living in Central Italy (near the Republic of S. Marino). The relationship between age, sex, years of smoking, number of cigarettes per day, and 210Po concentration was also studied. The results indicated that the urinary 210Po concentration follows a surprisingly perfect Log-normal distribution. Log 210Po concentrations were positively correlated to age (p < 0.0001), number of daily smoked cigarettes (p = 0.006), and years of smoking (p = 0.021), and associated to sex (p = 0.019). Consequently, this study provides upper reference limits for each sub-group identified by significantly predictive variables. PMID:19750019

  7. A size-frequency study of large Martian craters

    NASA Technical Reports Server (NTRS)

    Woronow, A.

    1975-01-01

    The log normal frequency distribution law was used to analyze the crater population on the surface of Mars. Resulting data show possible evidence for the size frequency evolution of crater producing bodies. Some regions on Mars display excessive depletion of either large or small craters; the most likely causes of the depletion are considered. Apparently, eolian sedimentation has markedly altered the population of the small craters south of -30 deg latitude. The general effects of crater obliteration in the Southern Hemisphere appear to be confined to diameters of less than 20 km. A strong depletion of large craters in a large region just south of Deuteronilus Mensae, and in a small region centered at 35 deg latitude and 10 deg west longitude, may indicate locations of subsurface ice.

  8. A Comparison Study of Summer Season Raindrop Size Distribution Between Palau and Taiwan, Two Islands in Western Pacific

    NASA Astrophysics Data System (ADS)

    Seela, Balaji Kumar; Janapati, Jayalakshmi; Lin, Pay-Liam; Reddy, K. Krishna; Shirooka, Ryuichi; Wang, Pao K.

    2017-11-01

    Raindrop size distribution (RSD) characteristics in summer season rainfall at two observational sites (Taiwan (24°58'N, 121°10'E) and Palau (7°20'N, 134°28'E)) in the western Pacific are studied by using five years of impact-type disdrometer data. In addition to disdrometer data, Tropical Rainfall Measuring Mission, Moderate Resolution Imaging Spectroradiometer, and ERA-Interim data sets are used to illustrate the dynamical and microphysical characteristics associated with summer season rainfall of Taiwan and Palau. The raindrop spectra of Taiwan and Palau showed a significant difference, with a higher concentration of middle and large drops in Taiwan than in Palau rainfall. RSD stratified on the basis of rain rate showed a higher mass-weighted mean diameter (Dm) and a lower normalized intercept parameter (log10Nw) in Taiwan than in Palau rainfall. Precipitation classification into stratiform and convective regimes showed higher Dm values in Taiwan than in Palau. Furthermore, for both locations, the convective precipitation has a higher Dm value than stratiform precipitation. The radar reflectivity-rain rate relations (Z = A·R^b) of Taiwan and Palau showed a clear variation in the coefficient and less variation in the exponent values. Terrain-influenced clouds extending to higher altitudes over Taiwan resulted in higher Dm and lower log10Nw values as compared to Palau.

  9. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time it shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a-priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.

  10. Size distribution of Portuguese firms between 2006 and 2012

    NASA Astrophysics Data System (ADS)

    Pascoal, Rui; Augusto, Mário; Monteiro, A. M.

    2016-09-01

    This study aims to describe the size distribution of Portuguese firms, as measured by annual sales and total assets, between 2006 and 2012, giving an economic interpretation for the evolution of the distribution along the time. Three distributions are fitted to data: the lognormal, the Pareto (and as a particular case Zipf) and the Simplified Canonical Law (SCL). We present the main arguments found in literature to justify the use of distributions and emphasize the interpretation of SCL coefficients. Methods of estimation include Maximum Likelihood, modified Ordinary Least Squares in log-log scale and Nonlinear Least Squares considering the Levenberg-Marquardt algorithm. When applying these approaches to Portuguese's firms data, we analyze if the evolution of estimated parameters in both lognormal power and SCL is in accordance with the known existence of a recession period after 2008. This is confirmed for sales but not for assets, leading to the conclusion that the first variable is a best proxy for firm size.
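
    A sketch of the two simplest fitting routes mentioned above: maximum likelihood for the lognormal body and OLS in log-log scale on the rank-size (Zipf) plot for the Pareto tail. The synthetic sales array stands in for the Portuguese firm data, and the 500-firm tail cutoff is arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        sales = rng.lognormal(mean=13.0, sigma=1.8, size=5_000)   # placeholder data

        # Log-normal fit by maximum likelihood.
        shape, loc, scale = stats.lognorm.fit(sales, floc=0.0)
        print("lognormal mu, sigma:", np.log(scale), shape)

        # Pareto (power-law) tail via OLS in log-log scale on the rank-size plot:
        # log(rank) ~ -alpha * log(size) + const.
        tail = np.sort(sales)[-500:][::-1]
        rank = np.arange(1, tail.size + 1)
        slope, intercept = np.polyfit(np.log(tail), np.log(rank), 1)
        print("Pareto tail exponent alpha ~", -slope)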

  11. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets. II. Group comparisons

    USGS Publications Warehouse

    Antweiler, Ronald C.

    2015-01-01

    The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
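
    The substitution and maximum-likelihood treatments compared above can be illustrated on synthetic left-censored data as below; the detection limit, sample size and distribution parameters are invented for the example, and the MLE is a generic censored-lognormal likelihood rather than the implementation evaluated in the study.

        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(3)
        conc = rng.lognormal(mean=1.0, sigma=0.8, size=200)   # "true" concentrations
        dl = 3.0                                              # detection limit
        censored = conc < dl

        # 1) Substitution: replace censored values with (sqrt(2)/2) * detection limit.
        x_sub = np.where(censored, np.sqrt(2.0) / 2.0 * dl, conc)
        print("substitution, mean of log x:", np.log(x_sub).mean())

        # 2) MLE for a log-normal with left-censoring: detects contribute the log pdf,
        #    censored observations contribute the log CDF at the detection limit.
        def negloglik(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            ll_det = stats.norm.logpdf(np.log(conc[~censored]), mu, sigma).sum()
            ll_cen = censored.sum() * stats.norm.logcdf((np.log(dl) - mu) / sigma)
            return -(ll_det + ll_cen)

        res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
        print("MLE mu, sigma:", res.x[0], np.exp(res.x[1]))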

  12. Crack surface roughness in three-dimensional random fuse networks

    NASA Astrophysics Data System (ADS)

    Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan

    2006-08-01

    Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ~ L^0.5 and is consistent with the scaling of the localization length ξ ~ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.

  13. Are there laws of genome evolution?

    PubMed

    Koonin, Eugene V

    2011-08-01

    Research in quantitative evolutionary genomics and systems biology led to the discovery of several universal regularities connecting genomic and molecular phenomic variables. These universals include the log-normal distribution of the evolutionary rates of orthologous genes; the power law-like distributions of paralogous family size and node degree in various biological networks; the negative correlation between a gene's sequence evolution rate and expression level; and differential scaling of functional classes of genes with genome size. The universals of genome evolution can be accounted for by simple mathematical models similar to those used in statistical physics, such as the birth-death-innovation model. These models do not explicitly incorporate selection; therefore, the observed universal regularities do not appear to be shaped by selection but rather are emergent properties of gene ensembles. Although a complete physical theory of evolutionary biology is inconceivable, the universals of genome evolution might qualify as "laws of evolutionary genomics" in the same sense "law" is understood in modern physics.

  14. Synchrotron quantification of ultrasound cavitation and bubble dynamics in Al-10Cu melts.

    PubMed

    Xu, W W; Tzanakis, I; Srirangam, P; Mirihanage, W U; Eskin, D G; Bodey, A J; Lee, P D

    2016-07-01

    Knowledge of the kinetics of gas bubble formation and evolution under cavitation conditions in molten alloys is important for the control of casting defects such as porosity and dissolved hydrogen. Using in situ synchrotron X-ray radiography, we studied the dynamic behaviour of ultrasonic cavitation gas bubbles in a molten Al-10 wt%Cu alloy. The size distribution, average radius and growth rate of cavitation gas bubbles were quantified under an acoustic intensity of 800 W/cm² and a maximum acoustic pressure of 4.5 MPa (45 atm). Bubbles exhibited a log-normal size distribution with an average radius of 15.3 ± 0.5 μm. Under the applied sonication conditions the growth rate of the bubble radius, R(t), followed a power law of the form R(t) = αt^β, with α = 0.0021 and β = 0.89. The observed tendencies were discussed in relation to bubble growth mechanisms in Al alloy melts. Copyright © 2016 Elsevier B.V. All rights reserved.
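
    A small sketch of fitting the reported growth law R(t) = αt^β by nonlinear least squares; the time grid and noise level are invented stand-ins for the radiography measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(5)
        t = np.linspace(0.05, 1.0, 20)                           # time points (illustrative)
        r_obs = 0.0021 * t ** 0.89 + rng.normal(0, 2e-5, t.size) # synthetic radius data

        def growth(t, alpha, beta):
            """Power-law bubble growth R(t) = alpha * t**beta."""
            return alpha * t ** beta

        (alpha, beta), _ = curve_fit(growth, t, r_obs, p0=(1e-3, 1.0))
        print(alpha, beta)    # ~0.0021 and ~0.89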

  15. Empirical analysis on the connection between power-law distributions and allometries for urban indicators

    NASA Astrophysics Data System (ADS)

    Alves, L. G. A.; Ribeiro, H. V.; Lenzi, E. K.; Mendes, R. S.

    2014-09-01

    We report on the existing connection between power-law distributions and allometries. As first reported in Gomez-Lievano et al. (2012) for the relationship between homicides and population, when these urban indicators present asymptotic power-law distributions, they can also display specific allometries among themselves. Here, we present an extensive characterization of this connection when considering all possible pairs of relationships from twelve urban indicators of Brazilian cities (such as child labor, illiteracy, income, sanitation and unemployment). Our analysis reveals that all our urban indicators are asymptotically distributed as power laws and that the proposed connection also holds for our data when the allometric relationship displays enough correlation. We have also found that not all allometric relationships are independent and that they can be understood as a consequence of the allometric relationship between the urban indicator and the population size. We further show that the residual fluctuations surrounding the allometries are characterized by an almost constant variance and log-normal distributions.

  16. Baseline MNREAD Measures for Normally Sighted Subjects From Childhood to Old Age

    PubMed Central

    Calabrèse, Aurélie; Cheong, Allen M. Y.; Cheung, Sing-Hang; He, Yingchen; Kwon, MiYoung; Mansfield, J. Stephen; Subramanian, Ahalya; Yu, Deyue; Legge, Gordon E.

    2016-01-01

    Purpose The continuous-text reading-acuity test MNREAD is designed to measure the reading performance of people with normal and low vision. This test is used to estimate maximum reading speed (MRS), critical print size (CPS), reading acuity (RA), and the reading accessibility index (ACC). Here we report the age dependence of these measures for normally sighted individuals, providing baseline data for MNREAD testing. Methods We analyzed MNREAD data from 645 normally sighted participants ranging in age from 8 to 81 years. The data were collected in several studies conducted by different testers and at different sites in our research program, enabling evaluation of robustness of the test. Results Maximum reading speed and reading accessibility index showed a trilinear dependence on age: first increasing from 8 to 16 years (MRS: 140–200 words per minute [wpm]; ACC: 0.7–1.0); then stabilizing in the range of 16 to 40 years (MRS: 200 ± 25 wpm; ACC: 1.0 ± 0.14); and decreasing to 175 wpm and 0.88 by 81 years. Critical print size was constant from 8 to 23 years (0.08 logMAR), increased slowly until 68 years (0.21 logMAR), and then more rapidly until 81 years (0.34 logMAR). logMAR reading acuity improved from −0.1 at 8 years to −0.18 at 16 years, then gradually worsened to −0.05 at 81 years. Conclusions We found a weak dependence of the MNREAD parameters on age in normal vision. In broad terms, MNREAD performance exhibits differences between three age groups: children 8 to 16 years, young adults 16 to 40 years, and middle-aged to older adults >40 years. PMID:27442222

  17. Resistance distribution in the hopping percolation model.

    PubMed

    Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad

    2005-07-01

    We study the distribution function P(ρ) of the effective resistance ρ in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form σ ∝ exp(−κr), where κ is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/κ^ν > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/κ^ν < 1 (very sensitive to such a removal) the distribution depends only on L/κ^ν and can be well approximated by a log-normal function with dispersion bκ^ν/L, where b is a coefficient which depends on the type of lattice, and ν is the correlation critical exponent.

  18. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107

  19. Perceptual Learning in Children With Infantile Nystagmus: Effects on Reading Performance.

    PubMed

    Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen

    2016-08-01

    Perceptual learning improves visual acuity and reduces crowding in children with infantile nystagmus (IN). Here, we compare the reading performance of 6- to 11-year-old children with IN with that of normal controls, and evaluate whether perceptual learning improves their reading. Children with IN were divided into two training groups: a crowded training group (n = 18; albinism: n = 8; idiopathic IN: n = 10) and an uncrowded training group (n = 17; albinism: n = 9; idiopathic IN: n = 8). Eleven children with normal vision also participated. Outcome measures were: reading acuity (the smallest readable font size), maximum reading speed, critical print size (font size below which reading is suboptimal), and acuity reserve (difference between reading acuity and critical print size). We used multiple regression analyses to test whether these reading parameters were related to the children's uncrowded distance acuity and/or crowding scores. Reading acuity and critical print size were 0.65 ± 0.04 and 0.69 ± 0.08 log units larger for children with IN than for children with normal vision. Maximum reading speed and acuity reserve did not differ between these groups. After training, reading acuity improved by 0.12 ± 0.02 logMAR and critical print size improved by 0.11 ± 0.04 logMAR in both IN training groups. The changes in reading acuity, critical print size, and acuity reserve of children with IN were tightly related to changes in their uncrowded distance acuity and the changes in magnitude and extent of crowding. Our findings are the first to show that visual acuity is not the only factor that restricts reading in children with IN, but that crowding also limits their reading performance. By targeting both of these spatial bottlenecks in children with IN, our perceptual learning paradigms significantly improved their reading acuity and critical print size. This shows that perceptual learning can effectively transfer to reading.

  20. Image analysis for the automated estimation of clonal growth and its application to the growth of smooth muscle cells.

    PubMed

    Gavino, V C; Milo, G E; Cornwell, D G

    1982-03-01

    Image analysis was used for the automated measurement of colony frequency (f) and colony diameter (d) in cultures of smooth muscle cells. Initial studies with the inverted microscope showed that the number of cells (N) in a colony varied directly with d: log N = 1.98 log d - 3.469. Image analysis generated the complement of a cumulative distribution for f as a function of d. The number of cells in each segment of the distribution function was calculated by multiplying f and the average N for the segment. These data were displayed as a cumulative distribution function. The total number of colonies (fT) and the total number of cells (NT) were used to calculate the average colony size (NA). Population doublings (PD) were then expressed as log2 NA. Image analysis confirmed previous studies in which colonies were sized and counted with an inverted microscope. Thus, image analysis is a rapid and automated technique for the measurement of clonal growth.
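
    The colony arithmetic described above can be written out directly; the sketch below assumes base-10 logarithms in the calibration log N = 1.98 log d - 3.469 and uses an invented diameter histogram.

        import numpy as np

        def cells_per_colony(d):
            """N from the calibration log N = 1.98 log d - 3.469 (base-10 logs assumed)."""
            return 10 ** (1.98 * np.log10(d) - 3.469)

        # Illustrative histogram: colony diameter (arbitrary units) and frequency f.
        diam = np.array([100.0, 200.0, 400.0, 800.0])
        freq = np.array([40, 25, 10, 2])

        n_total = np.sum(freq * cells_per_colony(diam))   # NT, total number of cells
        f_total = freq.sum()                              # fT, total number of colonies
        n_avg = n_total / f_total                         # NA, average colony size
        pd = np.log2(n_avg)                               # population doublings, PD = log2 NA
        print(n_avg, pd)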

  1. Influence of overconsolidated condition on permeability evolution in silica sand

    NASA Astrophysics Data System (ADS)

    Kimura, S.; Kaneko, H.; Ito, T.; Nishimura, O.; Minagawa, H.

    2013-12-01

    The permeability of sediments is an important factor for the production of natural gas from natural gas hydrate bearing layers. Methane hydrate is regarded as one of the potential resources of natural gas. As a result of coring and logging, the existence of a large amount of methane hydrate is estimated in the Nankai Trough, offshore central Japan, where many folds and faults have been observed. In the present study, we investigate the permeability of silica sand specimens forming an artificial fault zone after large-displacement shear in the ring-shear test under two different conditions, normally consolidated and overconsolidated. No significant influence of the overconsolidation ratio (OCR) on permeability evolution is found. The permeability reduction is influenced a great deal by the magnitude of normal stress during large-displacement shearing. The grain size distribution and structure in the shear zone of the specimen after shearing at each normal stress level are analyzed by a laser-scattering particle analyzer and scanning electron microscopy, respectively. It is indicated that the grain size and porosity reduction due to particle crushing are the factors behind the permeability reduction. This study is financially supported by METI and the Research Consortium for Methane Hydrate Resources in Japan (the MH21 Research Consortium).

  2. Statistical distributions of ultra-low dose CT sinograms and their fundamental limits

    NASA Astrophysics Data System (ADS)

    Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; (3) an ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.

  3. Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.

    PubMed

    Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M

    2016-02-01

    Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.

  4. Detrital illite crystals identified from crystallite thickness measurements in siliciclastic sediments

    USGS Publications Warehouse

    Aldega, L.; Eberl, D.D.

    2005-01-01

    Illite crystals in siliciclastic sediments are heterogeneous assemblages of detrital material coming from various source rocks and, at paleotemperatures >70 °C, of superimposed diagenetic modification in the parent sediment. We distinguished the relative proportions of 2M1 detrital illite and possible diagenetic 1Md + 1M illite by a combined analysis of crystal-size distribution and illite polytype quantification. We found that the proportions of 1Md + 1M and 2M1 illite could be determined from crystallite thickness measurements (BWA method, using the MudMaster program) by unmixing measured crystallite thickness distributions using theoretical and calculated log-normal and/or asymptotic distributions. The end-member components that we used to unmix the measured distributions were three asymptotic-shaped distributions (assumed to be the diagenetic component of the mixture, the 1Md + 1M polytypes) calculated using the Galoper program (Phase A was simulated using 500 crystals per cycle of nucleation and growth, Phase B = 333/cycle, and Phase C = 250/cycle), and one theoretical log-normal distribution (Phase D, assumed to approximate the detrital 2M1 component of the mixture). In addition, quantitative polytype analysis was carried out using the RockJock software for comparison. The two techniques gave comparable results (r2 = 0.93), which indicates that the unmixing method permits one to calculate the proportion of illite polytypes and, therefore, the proportion of 2M1 detrital illite, from crystallite thickness measurements. The overall illite crystallite thicknesses in the samples were found to be a function of the relative proportions of thick 2M1 and thin 1Md + 1M illite. The percentage of illite layers in I-S mixed layers correlates with the mean crystallite thickness of the 1Md + 1M polytypes, indicating that these polytypes, rather than the 2M1 polytype, participate in I-S mixed layering.
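
    The unmixing step can be posed as non-negative least squares of a measured thickness distribution against end-member components; the exponential "asymptotic" end members below merely stand in for the Galoper-simulated phases A-C, and the log-normal component for the detrital phase D.

        import numpy as np
        from scipy.optimize import nnls
        from scipy.stats import expon, lognorm

        t = np.arange(1.0, 41.0)                       # thickness bins (nm)
        phases = np.column_stack([
            expon.pdf(t, scale=3.0),                   # stand-ins for phases A-C
            expon.pdf(t, scale=5.0),
            expon.pdf(t, scale=8.0),
            lognorm.pdf(t, s=0.5, scale=15.0),         # stand-in for detrital phase D
        ])
        phases /= phases.sum(axis=0)                   # normalize each end member

        # "Measured" distribution: a known mixture, for demonstration only.
        measured = 0.25 * phases[:, 0] + 0.35 * phases[:, 2] + 0.40 * phases[:, 3]

        weights, _ = nnls(phases, measured)
        weights /= weights.sum()
        print("recovered fractions:", np.round(weights, 3))   # detrital fraction is weights[3]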

  5. Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar

    NASA Technical Reports Server (NTRS)

    Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Levy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow log-Poisson statistics, the passive scalar increments follow statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.

  6. A Statistical Method for Estimating Missing GHG Emissions in Bottom-Up Inventories: The Case of Fossil Fuel Combustion in Industry in the Bogota Region, Colombia

    NASA Astrophysics Data System (ADS)

    Jimenez-Pizarro, R.; Rojas, A. M.; Pulido-Guio, A. D.

    2012-12-01

    The development of environmentally, socially and financially suitable greenhouse gas (GHG) mitigation portfolios requires detailed disaggregation of emissions by activity sector, preferably at the regional level. Bottom-up (BU) emission inventories are intrinsically disaggregated, but although detailed, they are frequently incomplete. Missing and erroneous activity data are rather common in emission inventories of GHG, criteria and toxic pollutants, even in developed countries. The fraction of missing and erroneous data can be rather large in developing country inventories. In addition, the cost and time for obtaining or correcting this information can be prohibitive or can delay the inventory development. This is particularly true for regional BU inventories in the developing world. Moreover, a rather common practice is to disregard or to arbitrarily impute low default activity or emission values to missing data, which typically leads to significant underestimation of the total emissions. Our investigation focuses on GHG emissions by fossil fuel combustion in industry in the Bogota Region, composed of Bogota and its adjacent, semi-rural area of influence, the Province of Cundinamarca. We found that the BU inventories for this sub-category substantially underestimate emissions when compared to top-down (TD) estimations based on sub-sector specific national fuel consumption data and regional energy intensities. Although both BU inventories have a substantial number of missing and evidently erroneous entries, i.e. information on fuel consumption per combustion unit per company, the validated energy use and emission data display clear and smooth frequency distributions, which can be adequately fitted to bimodal log-normal distributions. This is not unexpected as industrial plant sizes are typically log-normally distributed. Moreover, our statistical tests suggest that industrial sub-sectors, as classified by the International Standard Industrial Classification (ISIC), are also well represented by log-normal distributions. Using the validated data, we tested several missing data estimation procedures, including Monte Carlo sampling of the real and fitted distributions, and a per-ISIC estimation based on bootstrap-calculated mean values. These results will be presented and discussed in detail. Our results suggest that the accuracy of sub-sector BU emission inventories, particularly in developing regions, could be significantly improved if they are designed and carried out to be representative sub-samples (surveys) of the actual universe of emitters. A large fraction of the missing data could be subsequently estimated by robust statistical procedures provided that most of the emitters were accounted for by number and ISIC.
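
    A minimal sketch of the imputation idea: fit a log-normal to the validated entries of one sub-sector and sample the missing plants from the fit by Monte Carlo; all numbers below are invented placeholders for the Bogota inventory data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        reported = rng.lognormal(mean=6.0, sigma=1.1, size=120)   # validated fuel use for one ISIC sub-sector
        n_missing = 30                                            # plants with missing entries

        shape, loc, scale = stats.lognorm.fit(reported, floc=0.0)
        imputed = stats.lognorm.rvs(shape, loc=0.0, scale=scale,
                                    size=n_missing, random_state=rng)

        print("naive total:", reported.sum())
        print("total with imputed plants:", reported.sum() + imputed.sum())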

  7. Simulations of large acoustic scintillations in the straits of Florida.

    PubMed

    Tang, Xin; Tappert, F D; Creamer, Dennis B

    2006-12-01

    Using a full-wave acoustic model, Monte Carlo numerical studies of intensity fluctuations in a realistic shallow water environment that simulates the Straits of Florida, including internal wave fluctuations and bottom roughness, have been performed. Results show that the sound intensity at distant receivers scintillates dramatically. The acoustic scintillation index SI increases rapidly with propagation range and is significantly greater than unity at ranges beyond about 10 km. This result supports a theoretical prediction by one of the authors. Statistical analyses show that the distribution of intensity of the random wave field saturates to the expected Rayleigh distribution with SI= 1 at short range due to multipath interference effects, and then SI continues to increase to large values. This effect, which is denoted supersaturation, is universal at long ranges in waveguides having lossy boundaries (where there is differential mode attenuation). The intensity distribution approaches a log-normal distribution to an excellent approximation; it may not be a universal distribution and comparison is also made to a K distribution. The long tails of the log-normal distribution cause "acoustic intermittency" in which very high, but rare, intensities occur.
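
    The scintillation index used above is just the normalized intensity variance; the sketch below computes it for exponentially distributed intensities (the saturated Rayleigh limit, SI ≈ 1) and for log-normal intensities with heavy tails (SI > 1).

        import numpy as np

        def scintillation_index(intensity):
            """SI = <I^2>/<I>^2 - 1, i.e. var(I) / mean(I)^2."""
            intensity = np.asarray(intensity, dtype=float)
            return intensity.var() / intensity.mean() ** 2

        rng = np.random.default_rng(2)
        i_exp = rng.exponential(scale=1.0, size=100_000)                      # saturated limit
        sigma = 1.0
        i_ln = rng.lognormal(mean=-sigma ** 2 / 2, sigma=sigma, size=100_000) # unit-mean log-normal

        print(scintillation_index(i_exp))   # ~1
        print(scintillation_index(i_ln))    # ~exp(sigma^2) - 1, about 1.72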

  8. Lognormal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of α-Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2010-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
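
    A sketch of how a Poisson-lognormal (P-LN) probability mass function can be evaluated by integrating the Poisson likelihood over a log-normal activity distribution; the parameters are illustrative and are not those fitted to the autoradiography data.

        import numpy as np
        from scipy import integrate, stats

        def poisson_lognormal_pmf(n, mu, sigma):
            """P(N = n) when N | lam ~ Poisson(lam) and lam ~ LogNormal(mu, sigma)."""
            def integrand(lam):
                return stats.poisson.pmf(n, lam) * stats.lognorm.pdf(lam, sigma, scale=np.exp(mu))
            val, _ = integrate.quad(integrand, 0.0, np.inf)
            return val

        mu, sigma = np.log(5.0), 0.6       # illustrative mean track count and spread
        pmf = [poisson_lognormal_pmf(n, mu, sigma) for n in range(25)]
        print(sum(pmf))                    # close to 1
        print(pmf[5])                      # probability of exactly 5 tracks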

  9. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    PubMed

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull in only three out of 15 cases, with statistical significance. The results are far more complex than what had previously been published, and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons, based on their skewness and kurtosis values, although without statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
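    The Kolmogorov-Smirnov comparison described here can be reproduced in outline with scipy; the sketch below uses synthetic ACOF values (not the published measurements) and notes that the p-values are optimistic when the parameters are fitted from the same sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic ACOF measurements for one floor/contaminant combination.
acof = rng.lognormal(mean=np.log(0.4), sigma=0.15, size=100)

candidates = {
    "normal":     (stats.norm,        {}),
    "log-normal": (stats.lognorm,     {"floc": 0}),
    "Weibull":    (stats.weibull_min, {"floc": 0}),
}
for name, (dist, fit_kw) in candidates.items():
    params = dist.fit(acof, **fit_kw)
    # Note: KS p-values are biased upward because the parameters were estimated
    # from the same data; they are used here only for a rough ranking.
    d, p = stats.kstest(acof, dist.cdf, args=params)
    print(f"{name:10s}  D = {d:.3f}  p = {p:.3f}")
```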

  10. Footprint area analysis of binary imaged Cupriavidus necator cells to study PHB production at balanced, transient, and limited growth conditions in a cascade process.

    PubMed

    Vadlja, Denis; Koller, Martin; Novak, Mario; Braunegg, Gerhart; Horvat, Predrag

    2016-12-01

    Statistical distributions of cell and poly[3-(R)-hydroxybutyrate] (PHB) granule size and of the number of granules per cell are investigated for PHB production in a five-stage cascade (5CSTR). Electron microscopic pictures of cells from individual cascade stages (R1-R5) were converted to binary pictures to visualize footprint areas for polyhydroxyalkanoate (PHA) and non-PHA biomass. Results for each stage were correlated to the corresponding experimentally determined kinetics (specific growth rate μ and specific productivity π). A log-normal distribution describes the variability of PHA granule size, whereas for R1 and R4 a gamma distribution best reflects the situation. R1, devoted to balanced biomass synthesis, predominantly contains cells with rather small granules, whereas with increasing residence time τ, maximum and average granule sizes by trend increase, approaching an upper limit determined by the cell's geometry. Generally, the increase of intracellular PHA content and of the ratio of granule to cell area slows down along the cascade. Further, the number of granules per cell decreases with increasing τ. Data for μ and π obtained by binary picture analysis correlate well with the experimental results. The work describes long-term continuous PHA production under balanced, transient, and nutrient-deficient conditions, as well as their reflection in granule size, granule number, and cell structure on the microscopic level.

  11. Localized massive halo properties in BAHAMAS and MACSIS simulations: scalings, log-normality, and covariance

    NASA Astrophysics Data System (ADS)

    Farahi, Arya; Evrard, August E.; McCarthy, Ian; Barnes, David J.; Kay, Scott T.

    2018-05-01

    Using tens of thousands of halos realized in the BAHAMAS and MACSIS simulations produced with a consistent astrophysics treatment that includes AGN feedback, we validate a multi-property statistical model for the stellar and hot gas mass behavior in halos hosting groups and clusters of galaxies. The large sample size allows us to extract fine-scale mass-property relations (MPRs) by performing local linear regression (LLR) on individual halo stellar mass (Mstar) and hot gas mass (Mgas) as a function of total halo mass (Mhalo). We find that: 1) both the local slope and variance of the MPRs run with mass (primarily) and redshift (secondarily); 2) the conditional likelihood, p(Mstar, Mgas | Mhalo, z), is accurately described by a multivariate log-normal distribution; and 3) the covariance of Mstar and Mgas at fixed Mhalo is generally negative, reflecting a partially closed baryon box model for high mass halos. We validate the analytical population model of Evrard et al. (2014), finding sub-percent accuracy in the log-mean halo mass selected at fixed property, ⟨ln Mhalo|Mgas⟩ or ⟨ln Mhalo|Mstar⟩, when scale-dependent MPR parameters are employed. This work highlights the potential importance of allowing for running in the slope and scatter of MPRs when modeling cluster counts for cosmological studies. We tabulate LLR fit parameters as a function of halo mass at z = 0, 0.5 and 1 for two popular mass conventions.
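    The local linear regression (LLR) used to extract running slope and scatter can be sketched as a Gaussian-kernel weighted least-squares fit in log mass; the example below uses a mock halo sample and an arbitrary kernel width, not the BAHAMAS/MACSIS catalogues.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mock halo sample: ln(Mhalo) and a gas-mass relation whose slope runs with mass.
ln_Mhalo = rng.uniform(np.log(1e13), np.log(1e15), size=20000)
ln_Mgas = (0.9 * ln_Mhalo + 0.05 * (ln_Mhalo - np.log(1e14))**2
           + rng.normal(0.0, 0.1, ln_Mhalo.size))

def llr(x0, x, y, width=0.3):
    """Gaussian-kernel local linear regression at x0: returns (mean, slope, scatter)."""
    w = np.exp(-0.5 * ((x - x0) / width) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    resid = y - X @ beta
    scatter = np.sqrt(np.sum(w * resid**2) / np.sum(w))
    return beta[0], beta[1], scatter

for m in np.log([1e13, 1e14, 1e15]):
    mean, slope, scat = llr(m, ln_Mhalo, ln_Mgas)
    print(f"ln Mhalo = {m:5.1f}: local slope = {slope:.2f}, local scatter = {scat:.2f}")
```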

  12. Cumulative slant path rain attenuation associated with COMSTAR beacon at 28.56 GHz for Wallops Island, Virginia

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1978-01-01

    Yearly, monthly, and time-of-day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length, disdrometer, and rain rate results. The yearly attenuation and rain rate distributions follow, to a good approximation, log-normal variations for most fade and rain rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight-hour time periods showing the maximum and minimum number of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on the raindrop size distribution, provided these frequencies are not widely separated.

  13. A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.

    PubMed

    Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F

    2016-01-01

    Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the mean posterior estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Non-informative normal priors for the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with a mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement from using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed proportion of deaths under the alternative hypothesis compared with the adaptive designs with no stopping rules. Such Bayesian hybrid adaptive survival trials may be promising alternatives to traditional designs, reducing the duration of survival trials, as well as optimizing the ethical concerns for patients enrolled in the trial.

  14. An "ASYMPTOTIC FRACTAL" Approach to the Morphology of Malignant Cell Nuclei

    NASA Astrophysics Data System (ADS)

    Landini, Gabriel; Rippin, John W.

    To investigate nuclear membrane irregularity quantitatively, 672 nuclei from 10 cases of oral cancer (squamous cell carcinoma) and normal cells from oral mucosa were studied in transmission electron micrographs. The nuclei were photographed at ×1400 magnification and transferred to computer memory (1 pixel = 35 nm). The perimeter of the profiles was analysed using the "yardstick method" of fractal dimension estimation, and the log-log plot of ruler size vs. boundary length demonstrated that there is a significant effect of resolution on length measurement. However, this effect seems to disappear at higher resolutions. As this observation is compatible with the concept of an asymptotic fractal, we estimated the parameters c, L and B_m from the asymptotic fractal formula B_r = B_m [1 + (r/L)^c]^(-1), where B_r is the boundary length measured with a ruler of size r, B_m is the maximum boundary length for r → 0, L is a constant, and c is the asymptotic fractal dimension minus the topological dimension (D - D_t) for r → ∞. Analyses of variance showed c to be significantly higher in the normal than in the malignant cases (P < 0.001), but log(L) and B_m to be significantly higher in the malignant cases (P < 0.001). A multivariate linear discriminant analysis on c, log(L) and B_m re-classified 76.6% of the cells correctly (84.8% of the normal and 67.5% of the tumor cells). Furthermore, this shows that asymptotic fractal analysis applied to nuclear profiles has great potential for shape quantification in the diagnosis of oral cancer.

  15. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability to predict both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
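    A minimal version of the idea, fitting a scaled gamma pdf to unit hydrograph ordinates by nonlinear least squares, is sketched below; the ordinates are synthetic and scipy's curve_fit stands in for the Mathematica and genetic-algorithm optimizations used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

rng = np.random.default_rng(5)

# Synthetic hourly unit hydrograph ordinates generated from a gamma shape plus noise.
t = np.arange(1.0, 25.0)                         # hours
uh_obs = np.clip(gamma.pdf(t, a=3.5, scale=2.0)
                 + rng.normal(0.0, 0.005, t.size), 0.0, None)

# Treat the UH as a scaled gamma pdf and fit its parameters.
def gamma_uh(t, a, scale, volume):
    return volume * gamma.pdf(t, a, scale=scale)

(a, scale, volume), _ = curve_fit(gamma_uh, t, uh_obs, p0=[2.0, 1.0, 1.0], maxfev=10000)
t_peak = (a - 1.0) * scale if a > 1 else 0.0     # mode of the gamma pdf
print(f"shape = {a:.2f}, scale = {scale:.2f} h, volume = {volume:.2f}, time to peak = {t_peak:.1f} h")
```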

  16. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  17. Flame surface statistics of constant-pressure turbulent expanding premixed flames

    NASA Astrophysics Data System (ADS)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2014-04-01

    In this paper we investigate the local flame surface statistics of constant-pressure turbulent expanding flames. First the statistics of the local length ratio are experimentally determined from high-speed planar Mie scattering images of spherically expanding flames, with the length ratio on the measurement plane, at predefined equiangular sectors, defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming isotropic distribution of such flame segments, we then convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. It is found that both the length-ratio and area-ratio pdfs are nearly log-normally distributed and show self-similar behavior with increasing radius. Near log-normality and the rather intermittent behavior of the flame-length ratio suggest similarity with dissipation rate quantities, which stimulates multifractal analysis.

  18. Detection of Person Misfit in Computerized Adaptive Tests with Polytomous Items.

    ERIC Educational Resources Information Center

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    2002-01-01

    Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…

  19. Investigation of the milling capabilities of the F10 Fine Grind mill using Box-Behnken designs.

    PubMed

    Tan, Bernice Mei Jin; Tay, Justin Yong Soon; Wong, Poh Mun; Chan, Lai Wah; Heng, Paul Wan Sia

    2015-01-01

    Size reduction or milling of the active is often the first processing step in the design of a dosage form. The ability of a mill to convert coarse crystals into the target size and size distribution efficiently is highly desirable as the quality of the final pharmaceutical product after processing is often still dependent on the dimensional attributes of its component constituents. The F10 Fine Grind mill is a mechanical impact mill designed to produce unimodal mid-size particles by utilizing a single-pass two-stage size reduction process for fine grinding of raw materials needed in secondary processing. Box-Behnken designs were used to investigate the effects of various mill variables (impeller, blower and feeder speeds and screen aperture size) on the milling of coarse crystals. Response variables included the particle size parameters (D10, D50 and D90), span and milling rate. Milled particles in the size range of 5-200 μm, with D50 ranging from 15 to 60 μm, were produced. The impeller and feeder speeds were the most critical factors influencing the particle size and milling rate, respectively. Size distributions of milled particles were better described by their goodness-of-fit to a log-normal distribution (i.e. unimodality) rather than span. Milled particles with symmetrical unimodal distributions were obtained when the screen aperture size was close to the median diameter of coarse particles employed. The capacity for high throughput milling of particles to a mid-size range, which is intermediate between conventional mechanical impact mills and air jet mills, was demonstrated in the F10 mill. Prediction models from the Box-Behnken designs will aid in providing a better guide to the milling process and milled product characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. A Bayesian Surrogate for Regional Skew in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-06-01

    The problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered. One approach to this problem is given in Bulletin 17B of the U.S. Water Resources Council (1981) for the log-Pearson distribution. Here a lesser known distribution is considered, namely, the power normal which fits flood data as well as the log-Pearson and has a shape parameter denoted by λ derived from a Box-Cox power transformation. The problem of regionalizing λ is considered from an empirical Bayes perspective where site and regional flood data are used to infer λ. The distortive effects of spatial correlation and heterogeneity of site sampling variance of λ are explicitly studied with spatial correlation being found to be of secondary importance. The end product of this analysis is the posterior distribution of the power normal parameters expressing, in probabilistic terms, what is known about the parameters given site flood data and regional information on λ. This distribution can be used to provide the designer with several types of information. The posterior distribution of the T-year flood is derived. The effect of nonlinearity in λ on inference is illustrated. Because uncertainty in λ is explicitly allowed for, the understatement in confidence limits due to fixing λ (analogous to fixing log skew) is avoided. Finally, it is shown how to obtain the marginal flood distribution which can be used to select a design flood with specified exceedance probability.

  1. Consequence of reputation in the Sznajd consensus model

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno; Forgerini, Fabricio L.

    2010-07-01

    In this work we study a modified version of the Sznajd sociophysics model. In particular we introduce reputation, a mechanism that limits the capacity of persuasion of the agents. The reputation is introduced as a time-dependent score, and its introduction avoids dictatorship (all spins parallel) for a wide range of parameters. The relaxation time follows a log-normal-like distribution. In addition, we show that the usual phase transition also occurs, as in the standard model, and that it depends on the initial concentration of individuals following an opinion, occurring at an initial density of up spins greater than 1/2. The transition point is determined by means of a finite-size scaling analysis.

  2. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.

    2014-09-01

    Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict flood magnitudes for a given magnitude and frequency of extreme rainfall events. This study analyses the application of a rainfall partial duration series (PDS) in the fast-growing urban area of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (normal, log-normal, extreme value type I, generalized extreme value, Pearson type III, and log-Pearson type III) and their parameters were estimated using the method of L-moments. Different model selection criteria were also applied, namely the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC) and the Anderson-Darling Criterion (ADC). The analysis indicated that the generalized extreme value distribution provides the best fit to the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
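    A rough analogue of the model-selection step can be written with scipy; the sketch below fits several candidate distributions by maximum likelihood (a simplification, since the paper uses L-moments) to a synthetic partial duration series and ranks them by AIC.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic partial duration series of daily rainfall depths (mm) above a threshold.
pds = rng.gamma(shape=0.9, scale=18.0, size=250) + 10.0

candidates = {
    "normal":      (stats.norm,       {}),
    "log-normal":  (stats.lognorm,    {"floc": 0}),
    "Gumbel":      (stats.gumbel_r,   {}),
    "GEV":         (stats.genextreme, {}),
    "Pearson III": (stats.pearson3,   {}),
}
for name, (dist, kw) in candidates.items():
    params = dist.fit(pds, **kw)
    ll = dist.logpdf(pds, *params).sum()
    aic = 2 * len(params) - 2 * ll      # lower AIC indicates a better trade-off
    print(f"{name:12s} AIC = {aic:7.1f}")
```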

  3. Determining size-specific emission factors for environmental tobacco smoke particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klepeis, Neil E.; Apte, Michael G.; Gundel, Lara A.

    Because size is a major controlling factor for indoor airborne particle behavior, human particle exposure assessments will benefit from improved knowledge of size-specific particle emissions. We report a method of inferring size-specific mass emission factors for indoor sources that makes use of an indoor aerosol dynamics model, measured particle concentration time series data, and an optimization routine. This approach provides--in addition to estimates of the emissions size distribution and integrated emission factors--estimates of deposition rate, an enhanced understanding of particle dynamics, and information about model performance. We applied the method to size-specific environmental tobacco smoke (ETS) particle concentrations measured every minute with an 8-channel optical particle counter (PMS-LASAIR; 0.1-2+ micrometer diameters) and every 10 or 30 min with a 34-channel differential mobility particle sizer (TSI-DMPS; 0.01-1+ micrometer diameters) after a single cigarette or cigar was machine-smoked inside a low air-exchange-rate 20 m³ chamber. The aerosol dynamics model provided good fits to observed concentrations when using optimized values of mass emission rate and deposition rate for each particle size range as input. Small discrepancies observed in the first 1-2 hours after smoking are likely due to the effect of particle evaporation, a process neglected by the model. Size-specific ETS particle emission factors were fit with log-normal distributions, yielding an average mass median diameter of 0.2 micrometers and an average geometric standard deviation of 2.3, with no systematic differences between cigars and cigarettes. The equivalent total particle emission rate, obtained by integrating each size distribution, was 0.2-0.7 mg/min for cigars and 0.7-0.9 mg/min for cigarettes.
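    The final log-normal fit of the size-specific emission factors can be illustrated as follows; the bin diameters and per-bin masses are hypothetical, and the fit returns a mass median diameter (MMD) and geometric standard deviation (GSD) analogous to the paper's 0.2 µm and 2.3.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical size-specific emission factors (mg per bin) at the bin mid-diameters.
dp = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1.5])       # micrometres
em = np.array([0.02, 0.10, 0.30, 0.45, 0.30, 0.10, 0.02])  # mg emitted per bin

# Fit a log-normal mass distribution in diameter to recover MMD and GSD.
def lognormal_mass(dp, total, mmd, gsd):
    return (total / (np.sqrt(2.0 * np.pi) * np.log(gsd))
            * np.exp(-0.5 * (np.log(dp / mmd) / np.log(gsd)) ** 2))

(total, mmd, gsd), _ = curve_fit(lognormal_mass, dp, em, p0=[1.0, 0.2, 2.0])
print(f"total = {total:.2f} mg, MMD = {mmd:.2f} um, GSD = {gsd:.2f}")
```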

  4. SOLVE The performance analyst for hardwood sawmills

    Treesearch

    Jeff Palmer; Jan Wiedenbeck; Elizabeth Porterfield

    2009-01-01

    Presents the user's manual and CD-ROM for SOLVE, a computer program that helps sawmill managers improve efficiency and solve problems commonly found in hardwood sawmills. SOLVE provides information on key operational factors including log size distribution, lumber grade yields, lumber recovery factor and overrun, and break-even log costs. (Microsoft Windows® Edition)...

  5. Managing logging residue under the timber sale contract.

    Treesearch

    Thomas C. Adams

    1980-01-01

    Management of logging residue is becoming an important part of timber sale planning. This involves controlling the amount of residue remaining on the ground and its distribution by diameter size class. Some residue is beneficial. An interdisciplinary team specified a desired residue level for one clearcutting unit of this trial. For comparison another cutting...

  6. Size distributions of polycyclic aromatic hydrocarbons in urban atmosphere: sorption mechanism and source contributions to respiratory deposition

    NASA Astrophysics Data System (ADS)

    Lv, Yan; Li, Xiang; Xu, Ting Ting; Cheng, Tian Tao; Yang, Xin; Chen, Jian Min; Iinuma, Yoshiteru; Herrmann, Hartmut

    2016-03-01

    In order to better understand the particle size distribution of polycyclic aromatic hydrocarbons (PAHs) and their source contribution to the human respiratory system, size-resolved PAHs have been studied in ambient aerosols at a megacity site in Shanghai during a 1-year period (2012-2013). The results showed that the PAHs had a bimodal distribution, with one mode peak in the fine-particle size range (0.4-2.1 µm) and another mode peak in the coarse-particle size range (3.3-9.0 µm). Along with the increase in ring number of PAHs, the intensity of the fine-mode peak increased, while the coarse-mode peak decreased. Plotting of log(PAH / PM) against log(Dp) showed that all slope values were above -1, suggesting that multiple mechanisms (adsorption and absorption) controlled the particle size distribution of PAHs. The total deposition flux of PAHs in the respiratory tract was calculated as 8.8 ± 2.0 ng h⁻¹. The highest lifetime cancer risk (LCR) was estimated at 1.5 × 10⁻⁶, which exceeded the unit risk of 10⁻⁶. The LCR values presented here were mainly influenced by accumulation-mode PAHs, which came from biomass burning (24 %), coal combustion (25 %), and vehicular emissions (27 %). The present study provides us with a mechanistic understanding of the particle size distribution of PAHs and their transport in the human respiratory system, which can help develop better source control strategies.

  7. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    PubMed

    Dekkers, A L M; Slob, W

    2012-10-01

    In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications--on folate intake and fruit consumption--confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
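    The core idea, using Gaussian quadrature to evaluate the mean of a back-transformed normal variable, can be shown with the log-transform special case, where the exact log-normal answer is available for comparison; this is only an illustration of the quadrature step, not the full published usual-intake procedure.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# E[g_inv(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature.
def back_transform_mean(g_inv, mu, sigma, n_nodes=9):
    x, w = hermgauss(n_nodes)           # nodes/weights for weight function exp(-x^2)
    return np.sum(w * g_inv(mu + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# With g_inv = exp (log-transformed intakes) the exact mean is exp(mu + sigma^2/2).
mu, sigma = 3.0, 0.5
approx = back_transform_mean(np.exp, mu, sigma)
exact = np.exp(mu + sigma**2 / 2.0)
print(f"Gauss-Hermite estimate: {approx:.4f}   exact log-normal mean: {exact:.4f}")
```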

  8. Percent area coverage through image analysis

    NASA Astrophysics Data System (ADS)

    Wong, Chung M.; Hong, Sung M.; Liu, De-Ling

    2016-09-01

    The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has been conventionally calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and may therefore introduce error in the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallout from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.

  9. Optical and Gravimetric Partitioning of Coastal Ocean Suspended Particulate Inorganic Matter (PIM)

    NASA Astrophysics Data System (ADS)

    Stavn, R. H.; Zhang, X.; Falster, A. U.; Gray, D. J.; Rick, J. J.; Gould, R. W., Jr.

    2016-02-01

    Recent work on the composition of suspended particulates in estuarine and coastal waters increases our capabilities to investigate the biogeochemical processes occurring in these waters. The biogeochemical properties associated with the particulates involve primarily sorption/desorption of dissolved matter onto the particle surfaces, which vary with the types of particulates. Therefore, the breakdown of suspended matter into chemical components will greatly expand the biogeochemistry of the coastal ocean region. The gravimetric techniques for these studies are here expanded and refined. In addition, new optical inversions greatly expand our capabilities to study the spatial extent of the components of suspended particulate matter. The partitioning of a gravimetric PIM determination into clay minerals and amorphous silica is aided by electron microprobe analysis. The amorphous silica is further partitioned into contributions by detrital material and by the tests of living diatoms, based on an empirical formula relating the chlorophyll content of cultured living diatoms in log-phase growth to their frustules determined after gravimetric analysis of the ashed diatom residue. The optical inversion of the composition of suspended particulates is based on the entire volume scattering function (VSF) measured in the field with a Multispectral Volume Scattering Meter and a LISST 100 meter. The VSF is partitioned into an optimal combination of contributions by particle subpopulations, each of which is uniquely represented by a refractive index and a log-normal size distribution. These subpopulations are aggregated to represent the two components of PIM using the corresponding refractive indices and sizes, which also yield a particle size distribution for the two components. The gravimetric results of partitioning PIM into clay minerals and amorphous silica confirm the optical inversions from the VSF.

  10. Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach.

    PubMed

    Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A

    2006-10-15

    Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.

  11. Single-trial log transformation is optimal in frequency analysis of resting EEG alpha.

    PubMed

    Smulders, Fren T Y; Ten Oever, Sanne; Donkers, Franc C L; Quaedflieg, Conny W E M; van de Ven, Vincent

    2018-02-01

    The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
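    The search over Box-Cox exponents can be sketched as below, using hypothetical single-epoch alpha power for one participant and an independent-samples t-test on the eyes-closed vs. eyes-open contrast as a stand-in for the paper's criterion; lambda = 0 corresponds to the log-transform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical single-epoch alpha power: eyes-closed power is a multiplicative
# factor above eyes-open, with log-normal epoch-to-epoch variation.
open_pow = rng.lognormal(mean=1.0, sigma=0.6, size=60)
closed_pow = rng.lognormal(mean=1.6, sigma=0.6, size=60)

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 is the log-transform."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

for lam in [1.0, 0.5, 0.25, 0.0]:
    t, _ = stats.ttest_ind(boxcox(closed_pow, lam), boxcox(open_pow, lam))
    print(f"lambda = {lam:4.2f}: eyes-closed vs eyes-open t = {t:5.1f}")
```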

  12. Narrow log-periodic modulations in non-Markovian random walks

    NASA Astrophysics Data System (ADS)

    Diniz, R. M. B.; Cressoni, J. C.; da Silva, M. A. A.; Mariz, A. M.; de Araújo, J. M.

    2017-12-01

    What are the necessary ingredients for log-periodicity to appear in the dynamics of a random walk model? Can they be subtle enough to be overlooked? Previous studies suggest that long-range damaged memory and negative feedback together are necessary conditions for the emergence of log-periodic oscillations. The role of negative feedback would then be crucial, forcing the system to change direction. In this paper we show that small-amplitude log-periodic oscillations can emerge when the system is driven by positive feedback. Due to their very small amplitude, these oscillations can easily be mistaken for numerical finite-size effects. The models we use consist of discrete-time random walks with strong memory correlations where the decision process is taken from memory profiles based either on a binomial distribution or on a delta distribution. Anomalous superdiffusive behavior and log-periodic modulations are shown to arise in the large-time limit for convenient choices of the model parameters.

  13. Survey of Large Methane Emitters in North America

    NASA Astrophysics Data System (ADS)

    Deiker, S.

    2017-12-01

    It has been theorized that methane emissions in the oil and gas industry follow log-normal or "fat-tailed" distributions, with large numbers of small sources for every very large source. Such distributions would have significant policy and operational implications. Unfortunately, by their very nature such distributions would require large sample sizes to verify. Until recently, such large-scale studies would have been prohibitively expensive. The largest public study to date sampled 450 wells, an order of magnitude too few to effectively constrain these models. During 2016 and 2017, Kairos Aerospace conducted a series of surveys with the LeakSurveyor imaging spectrometer, mounted on light aircraft. This small, lightweight instrument was designed to rapidly locate large emission sources. The resulting survey covers over three million acres of oil and gas production. This includes over 100,000 wells, thousands of storage tanks and over 7,500 miles of gathering lines. This data set now allows us to probe the distribution of large methane emitters. Results of this survey, and their implications for the methane emission distribution, methane policy and LDAR, will be discussed.

  14. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on the Weibull distribution. It may also be used for other survival distributions, such as the exponential, gamma, and log-normal. The proposed method is applied to the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Integrated NMR Core and Log Investigations With Respect to ODP LEG 204

    NASA Astrophysics Data System (ADS)

    Arnold, J.; Pechnig, R.; Clauser, C.; Anferova, S.; Blümich, B.

    2005-12-01

    NMR techniques are widely used in the oil industry and are one of the most suitable methods to evaluate in-situ formation porosity and permeability. Recently, efforts have been directed towards adapting NMR methods also to the Ocean Drilling Program (ODP) and the upcoming Integrated Ocean Drilling Program (IODP). We apply a newly developed light-weight, mobile NMR core scanner as a non-destructive instrument to routinely determine rock porosity and to estimate the pore size distribution. The NMR core scanner is used for transverse relaxation measurements on water-saturated core sections using a CPMG sequence with a short echo time. A regularized Laplace-transform analysis yields the distribution of transverse relaxation times T2. In homogeneous magnetic fields, T2 is proportional to the pore diameter of rocks. Hence, the T2 signal maps the pore-size distribution of the studied rock samples. For fully saturated samples, the integral of the distribution curve and the CPMG echo amplitude extrapolated to zero echo time are proportional to porosity. Preliminary results show that the NMR core scanner is a suitable tool to determine rock porosity and to estimate the pore size distribution of limestones and sandstones. Presently our investigations focus on Leg 204, where NMR Logging-While-Drilling (LWD) was performed for the first time in ODP. Leg 204 was drilled into Hydrate Ridge on the Cascadia accretionary margin, offshore Oregon. All drilling and logging operations were highly successful, providing excellent core, wireline, and LWD data from adjacent boreholes. Cores recovered during Leg 204 consist mainly of clay and claystone. As the NMR core scanner operates at frequencies higher than that of the well-logging sensor, it has a shorter dead time. This advantage makes the NMR core scanner sensitive to signals with T2 values down to 0.1 ms, as compared to 3 ms in NMR logging. Hence, we can study even rocks with small pores, such as the mud cores recovered during Leg 204. We present a comparison of data from core scanning and NMR logging. Future integration of conventional wireline data and electrical borehole wall images (RAB/FMS) will provide a detailed characterization of the sediments in terms of lithology, petrophysics, and fluid flow properties.

  16. Effects of seed predators of different body size on seed mortality in Bornean logged forest.

    PubMed

    Hautier, Yann; Saner, Philippe; Philipson, Christopher; Bagchi, Robert; Ong, Robert C; Hector, Andy

    2010-07-19

    The Janzen-Connell hypothesis proposes that seed and seedling enemies play a major role in maintaining high levels of tree diversity in tropical forests. However, human disturbance may alter guilds of seed predators including their body size distribution. These changes have the potential to affect seedling survival in logged forest and may alter forest composition and diversity. We manipulated seed density in plots beneath con- and heterospecific adult trees within a logged forest and excluded vertebrate predators of different body sizes using cages. We show that small and large-bodied predators differed in their effect on con- and heterospecific seedling mortality. In combination small and large-bodied predators dramatically decreased both con- and heterospecific seedling survival. In contrast, when larger-bodied predators were excluded small-bodied predators reduced conspecific seed survival leaving seeds coming from the distant tree of a different species. Our results suggest that seed survival is affected differently by vertebrate predators according to their body size. Therefore, changes in the body size structure of the seed predator community in logged forests may change patterns of seed mortality and potentially affect recruitment and community composition.

  17. Effects of Seed Predators of Different Body Size on Seed Mortality in Bornean Logged Forest

    PubMed Central

    Hautier, Yann; Saner, Philippe; Philipson, Christopher; Bagchi, Robert; Ong, Robert C.; Hector, Andy

    2010-01-01

    Background: The Janzen-Connell hypothesis proposes that seed and seedling enemies play a major role in maintaining high levels of tree diversity in tropical forests. However, human disturbance may alter guilds of seed predators including their body size distribution. These changes have the potential to affect seedling survival in logged forest and may alter forest composition and diversity. Methodology/Principal Findings: We manipulated seed density in plots beneath con- and heterospecific adult trees within a logged forest and excluded vertebrate predators of different body sizes using cages. We show that small and large-bodied predators differed in their effect on con- and heterospecific seedling mortality. In combination small and large-bodied predators dramatically decreased both con- and heterospecific seedling survival. In contrast, when larger-bodied predators were excluded small-bodied predators reduced conspecific seed survival leaving seeds coming from the distant tree of a different species. Conclusions/Significance: Our results suggest that seed survival is affected differently by vertebrate predators according to their body size. Therefore, changes in the body size structure of the seed predator community in logged forests may change patterns of seed mortality and potentially affect recruitment and community composition. PMID:20657841

  18. Characteristics of factory-grade hardwood logs delivered to Appalachian sawmills

    Treesearch

    Curtis D. Goho; Paul S. Wysor; Paul S. Wysor

    1970-01-01

    Until now, information about the characteristics of sawlogs delivered to Appalachian sawmills has been generally unavailable. We know what the standing timber is like, from forest-survey data. But this paper covers a different spectrum: the frequency distributions, by size, grade, volume, and species group, of factory-grade logs actually harvested and delivered to the...

  19. Zipf law: an extreme perspective

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2016-04-01

    Extreme value theory (EVT) asserts that the Fréchet law emerges universally from linearly scaled maxima of collections of independent and identically distributed random variables that are positive-valued. Observations of many real-world sizes, e.g. city-sizes, give rise to the Zipf law: if we rank the sizes decreasingly, and plot the log-sizes versus the log-ranks, then an affine line emerges. In this paper we present an EVT approach to the Zipf law. Specifically, we establish that whenever the Fréchet law emerges from the EVT setting, then the Zipf law follows. The EVT generation of the Zipf law, its universality, and its associated phase transition, are analyzed and described in detail.

  20. Measurement, Modeling, and Analysis of a Large-scale Blog Server Workload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Myeongjae; Hwang, Jeaho; Kim, Youngjae

    2010-01-01

    Despite the growing popularity of Online Social Networks (OSNs), the workload characteristics of OSN servers, such as those hosting blog services, are not well understood. Understanding workload characteristics is important for optimizing and improving the performance of current systems and software based on observed trends. Thus, in this paper, we characterize the system workload of the largest blog hosting servers in South Korea, Tistory. In addition to understanding the system workload of the blog hosting server, we have developed synthesized workloads and obtained the following major findings: (i) the transfer size of non-multimedia files and blog articles can be modeled by a truncated Pareto distribution and a log-normal distribution, respectively, and (ii) user accesses to blog articles do not show temporal locality, but they are strongly biased toward those posted along with images or audio.
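    The two findings about transfer sizes can be approximated with a short fit; the sketch below uses synthetic sizes, fits a log-normal to the "article" transfers, and estimates a Pareto-like tail exponent with a simple Hill/MLE estimator rather than a full truncated-Pareto fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic transfer sizes (bytes) standing in for blog-article transfers.
sizes = rng.lognormal(mean=9.0, sigma=1.2, size=5000)

# Log-normal fit (location fixed at 0 so that scale = exp(mu)).
shape, _, scale = stats.lognorm.fit(sizes, floc=0)
print(f"log-normal fit: mu = {np.log(scale):.2f}, sigma = {shape:.2f}")

# Tail exponent above a threshold x_min via the Hill/MLE estimator.
x_min = np.percentile(sizes, 90)
tail = sizes[sizes >= x_min]
alpha = 1.0 + tail.size / np.sum(np.log(tail / x_min))
print(f"tail exponent above the 90th percentile: alpha = {alpha:.2f}")
```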

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murase, Kenya, E-mail: murase@sahs.med.osaka-u.ac.jp; Song, Ruixiao; Hiratsuka, Samu

    We investigated the feasibility of visualizing blood coagulation using a system for magnetic particle imaging (MPI). A magnetic field-free line is generated using two opposing neodymium magnets, and transverse images are reconstructed from the third-harmonic signals received by a gradiometer coil, using the maximum likelihood-expectation maximization algorithm. Our MPI system was used to image the blood coagulation induced by adding CaCl₂ to whole sheep blood mixed with magnetic nanoparticles (MNPs). The "MPI value" was defined as the pixel value of the transverse image reconstructed from the third-harmonic signals. MPI values were significantly smaller for coagulated blood samples than for those without coagulation. We confirmed the rationale of these results by calculating the third-harmonic signals for the measured viscosities of the samples, with the assumption that the magnetization and particle size distribution of the MNPs obey the Langevin equation and a log-normal distribution, respectively. We concluded that MPI can be useful for visualizing blood coagulation.
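    The forward calculation mentioned at the end of this abstract can be sketched as an equilibrium Langevin response averaged over a log-normal size distribution, followed by a Fourier decomposition to extract the third harmonic; all parameter values below are illustrative, and relaxation/viscosity effects (the actual source of the coagulation contrast) are not modeled.

```python
import numpy as np

kB, T = 1.38e-23, 300.0          # Boltzmann constant (J/K), temperature (K)
mu0 = 4e-7 * np.pi               # vacuum permeability (T m/A)
Ms = 3.0e5                       # saturation magnetization of the MNP cores (A/m)
H0, f = 5e3, 1.0e3               # drive-field amplitude (A/m) and frequency (Hz)

# Log-normal particle diameters and the corresponding magnetic moments.
d_median, sigma_g = 25e-9, 0.3
d = np.exp(np.random.default_rng(9).normal(np.log(d_median), sigma_g, 2000))
m = Ms * np.pi * d**3 / 6.0

t = np.linspace(0.0, 1.0 / f, 2048, endpoint=False)   # exactly one drive period
H = H0 * np.sin(2.0 * np.pi * f * t)

def langevin(x):
    x = np.where(np.abs(x) < 1e-8, 1e-8, x)
    return 1.0 / np.tanh(x) - 1.0 / x

# Ensemble magnetization over the size distribution, then harmonic amplitudes.
xi = mu0 * np.outer(m, H) / (kB * T)
M = np.mean(m[:, None] * langevin(xi), axis=0)
spectrum = np.abs(np.fft.rfft(M))
print(f"third / first harmonic amplitude ratio = {spectrum[3] / spectrum[1]:.3f}")
```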

  2. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density--weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density--weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1--6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  3. Channel characterization and empirical model for ergodic capacity of free-space optical communication link

    NASA Astrophysics Data System (ADS)

    Alimi, Isiaka; Shahpari, Ali; Ribeiro, Vítor; Sousa, Artur; Monteiro, Paulo; Teixeira, António

    2017-05-01

    In this paper, we present experimental results on channel characterization of single input single output (SISO) free-space optical (FSO) communication link that is based on channel measurements. The histograms of the FSO channel samples and the log-normal distribution fittings are presented along with the measured scintillation index. Furthermore, we extend our studies to diversity schemes and propose a closed-form expression for determining ergodic channel capacity of multiple input multiple output (MIMO) FSO communication systems over atmospheric turbulence fading channels. The proposed empirical model is based on SISO FSO channel characterization. Also, the scintillation effects on the system performance are analyzed and results for different turbulence conditions are presented. Moreover, we observed that the histograms of the FSO channel samples that we collected from a 1548.51 nm link have good fits with log-normal distributions and the proposed model for MIMO FSO channel capacity is in conformity with the simulation results in terms of normalized mean-square error (NMSE).

  4. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  5. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background: Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods: Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results: The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion: The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346

  6. The partitioning behavior of persistent toxicant organic contaminants in eutrophic sediments: Coefficients and effects of fluorescent organic matter and particle size.

    PubMed

    He, Wei; Yang, Chen; Liu, Wenxiu; He, Qishuang; Wang, Qingmei; Li, Yilong; Kong, Xiangzhen; Lan, Xinyu; Xu, Fuliu

    2016-12-01

    In shallow lakes, the partitioning of organic contaminants into the water phase from the solid phase might pose a potential hazard to both benthic and planktonic organisms, which would further damage aquatic ecosystems. This study determined the concentrations of polycyclic aromatic hydrocarbons (PAHs), organochlorine pesticides (OCPs), and phthalate esters (PAEs) in both the sediment and the pore water from Lake Chaohu, calculated the sediment-pore water partition coefficient (K_D) and the organic carbon normalized sediment-pore water partition coefficient (K_OC), and explored the effects of particle size, organic matter content, and parallel factor fluorescent organic matter (PARAFAC-FOM) on K_D. The results showed that log K_D values of PAHs (2.61-3.94) and OCPs (1.75-3.05) were significantly lower than those of PAEs (4.13-5.05) (p < 0.05). The chemicals were ranked by log K_OC as follows: PAEs (6.05-6.94) > PAHs (4.61-5.86) > OCPs (3.62-4.97). A modified MCI model can predict K_OC values within a range of 1.5 log units at a higher frequency, especially for PAEs. A significantly positive correlation between K_OC and the octanol-water partition coefficient (K_OW) was observed for PAHs and OCPs. However, a significant correlation was found for PAEs only when excluding PAEs with lower K_OW. Sediments with smaller particle sizes (clay and silt) and their organic matter would affect the distributions of PAHs and OCPs between the sediment and the pore water. Protein-like fluorescent organic matter (C2) was associated with the K_D of PAEs. Furthermore, the partitioning of PARAFAC-FOM between the sediment and the pore water could potentially affect the distribution of organic pollutants. The partitioning mechanism of PAEs between the sediment and the pore water might be different from that of PAHs and OCPs, as indicated by their associations with influencing factors and K_OW. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Time Dependence of Aerosol Light Scattering Downwind of Forest Fires

    NASA Astrophysics Data System (ADS)

    Kleinman, L. I.; Sedlacek, A. J., III; Wang, J.; Lewis, E. R.; Springston, S. R.; Chand, D.; Shilling, J.; Arnott, W. P.; Freedman, A.; Onasch, T. B.; Fortner, E.; Zhang, Q.; Yokelson, R. J.; Adachi, K.; Buseck, P. R.

    2017-12-01

    In the first phase of BBOP (Biomass Burn Observation Project), a Department of Energy (DOE) sponsored study, wildland fires in the Pacific Northwest were sampled from the G-1 aircraft via sequences of transects that encountered emissions whose age (time since emission) ranged from approximately 15 minutes to four hours. Comparisons between transects allowed us to determine the near-field time evolution of trace gases, aerosol particles, and optical properties. The fractional increase in aerosol concentration with plume age was typically less than a third of the fractional increase in light scattering. In some fires the increase in light scattering exceeded a factor of two. Two possible causes for the discrepancy between scattering and aerosol mass are i) the downwind formation of refractory tar balls that are not detected by the AMS and therefore contribute to scattering but not to aerosol mass and ii) changes to the aerosol size distribution. Both possibilities are considered. Our information on tar balls comes from an analysis of TEM grids. A direct determination of size changes is complicated by extremely high aerosol number concentrations that caused coincidence problems for the PCASP and UHSAS probes. We instead construct a set of plausible log-normal size distributions and for each member of the set perform Mie calculations to determine the mass scattering efficiency (MSE), Ångström exponents, and backscatter ratios. Best-fit size distributions are selected by comparison with observed data derived from multi-wavelength scattering measurements, an extrapolated FIMS size distribution, and mass measurements from an SP-AMS. MSE at 550 nm varies from a typical near-source value of 2-3 to about 4 in aged air.

  8. The emergence of different tail exponents in the distributions of firm size variables

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki

    2013-05-01

    We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three-dimensional space (log K, log L, log Y), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
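
    A common way to read a tail exponent off a Zipf plot is to regress log rank on log size over the upper tail. A minimal sketch on synthetic Pareto-tailed data (not the firm-size dataset used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "firm sizes" with a Pareto (power-law) tail of exponent mu = 1.5
mu_true = 1.5
sizes = rng.pareto(mu_true, size=50_000) + 1.0

# Zipf-plot estimate: for a power-law tail, rank ~ N * x**(-mu),
# so the slope of log(rank) vs log(size) over the tail is -mu.
k = 2_000
tail = np.sort(sizes)[::-1][:k]
ranks = np.arange(1, k + 1)
slope, _ = np.polyfit(np.log(tail), np.log(ranks), 1)

print(f"estimated tail exponent ~ {-slope:.2f} (true value {mu_true})")
```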

  9. Evaluation of Low-Gravity Smoke Particulate for Spacecraft Fire Detection

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Mulholland, George; Meyer, Marit; Yuan, Zeng guang; Cleary, Thomas; Yang, Jiann; Greenberg, Paul; Bryg, Victoria

    2013-01-01

    Tests were conducted on the International Space Station to evaluate the smoke particulate size from materials and conditions that are typical of those expected in spacecraft fires. Five different materials representative of those found in spacecraft (Teflon, Kapton, cotton, silicone rubber and Pyrell) were heated to temperatures below the ignition point with conditions controlled to provide repeatable sample surface temperatures and air flow. The air flow past the sample during the heating period ranged from quiescent to 8 cm/s. The effective transport time to the measurement instruments was varied from 11 to 800 seconds to simulate different smoke transport conditions in spacecraft. The resultant aerosol was evaluated by three instruments which measured different moments of the particle size distribution. These moment diagnostics were used to determine the particle number concentration (zeroth moment), the diameter concentration (first moment), and the mass concentration (third moment). These statistics were combined to determine the diameter of average mass and the count mean diameter and, by assuming a log-normal distribution, the geometric mean diameter and geometric standard deviation were also calculated. Smoke particle samples were collected on TEM grids using a thermal precipitator for post-flight analysis. The TEM grids were analyzed to determine the particle morphology and shape parameters. The different materials produced particles with significantly different morphologies. Overall, the majority of the average smoke particle sizes were found to be in the 200 to 400 nanometer range, with the quiescent cases and the cases with increased transport time typically producing substantially larger particles. The results varied between materials, but the smoke particles produced in low gravity were typically twice the size of particles produced in normal gravity. These results can be used to establish design requirements for future spacecraft smoke detectors.
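
    Given the zeroth, first, and third moments, the count mean diameter and the diameter of average mass follow directly, and the log-normal assumption then gives the geometric mean diameter and geometric standard deviation through the Hatch-Choate relations. A minimal sketch with illustrative moment values (not the flight data):

```python
import numpy as np

# Illustrative moment concentrations (not ISS data): number (M0, #/cm^3),
# diameter (M1, nm/cm^3) and third moment (M3, nm^3/cm^3).
M0, M1, M3 = 1.0e5, 3.0e7, 6.0e12

count_mean_d = M1 / M0                   # count mean diameter, nm
d_avg_mass = (M3 / M0) ** (1.0 / 3.0)    # diameter of average mass, nm

# Hatch-Choate relations for a log-normal distribution with geometric mean
# diameter d_g and geometric standard deviation sigma_g:
#   count mean diameter      = d_g * exp(0.5 * ln(sigma_g)**2)
#   diameter of average mass = d_g * exp(1.5 * ln(sigma_g)**2)
ln2_sg = np.log(d_avg_mass / count_mean_d)   # = ln(sigma_g)**2
sigma_g = np.exp(np.sqrt(ln2_sg))
d_g = count_mean_d * np.exp(-0.5 * ln2_sg)

print(f"count mean diameter     : {count_mean_d:.0f} nm")
print(f"diameter of average mass: {d_avg_mass:.0f} nm")
print(f"geometric mean diameter : {d_g:.0f} nm, GSD = {sigma_g:.2f}")
```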

  10. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    NASA Astrophysics Data System (ADS)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size as well as the 90/95 percent crack length vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimators (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.
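
    The log-normal POD model for hit/miss data amounts to a Bernoulli likelihood with success probability Φ((ln a − μ)/σ), which can be maximized numerically. A minimal sketch on synthetic hit/miss data (not the engine-disk inspections):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic hit/miss data: crack lengths (mm) and detection outcomes drawn
# from a known POD curve, standing in for real inspection results.
a = rng.lognormal(mean=np.log(1.0), sigma=0.6, size=300)
pod_true = norm.cdf((np.log(a) - np.log(0.8)) / 0.5)
hit = rng.random(300) < pod_true

def neg_log_likelihood(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)             # keeps sigma positive
    p = norm.cdf((np.log(a) - mu) / sigma)
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.sum(hit * np.log(p) + (~hit) * np.log(1.0 - p))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Crack size detected with 90% probability (point estimate, no confidence bound)
a90 = np.exp(mu_hat + sigma_hat * norm.ppf(0.90))
print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}, a90 = {a90:.2f} mm")
```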

  11. Reallocation in modal aerosol models: impacts on predicting aerosol radiative effects

    NASA Astrophysics Data System (ADS)

    Korhola, T.; Kokkola, H.; Korhonen, H.; Partanen, A.-I.; Laaksonen, A.; Lehtinen, K. E. J.; Romakkaniemi, S.

    2013-08-01

    In atmospheric modelling applications the aerosol particle size distribution is commonly represented by a modal approach, in which particles in different size ranges are described with log-normal modes within predetermined size ranges. Such a method includes numerical reallocation of particles from one mode to another, for example during particle growth, leading to potentially artificial changes in the aerosol size distribution. In this study we analysed how this reallocation affects climatologically relevant parameters: cloud droplet number concentration, the aerosol-cloud interaction coefficient and the light extinction coefficient. We compared these parameters between a modal model with and without reallocation routines, and a high-resolution sectional model that was considered as a reference model. We analysed the relative differences of the parameters in different experiments that were designed to cover a wide range of dynamic aerosol processes occurring in the atmosphere. According to our results, limiting the allowed size ranges of the modes and the subsequent numerical remapping of the distribution by reallocation leads on average to an underestimation of cloud droplet number concentration (up to 100%) and an overestimation of light extinction (up to 20%). The analysis of the aerosol first indirect effect is more complicated, as the ACI parameter can be either over- or underestimated by the reallocating model, depending on the conditions. However, for example in the case of atmospheric new particle formation events followed by rapid particle growth, the reallocation can cause on average around a 10% overestimation of the ACI parameter. Thus it is shown that reallocation affects the ability of a model to estimate aerosol climate effects accurately, and this should be taken into account when using and developing aerosol models.

  12. Impact of particle size on distribution and human exposure of flame retardants in indoor dust.

    PubMed

    He, Rui-Wen; Li, Yun-Zi; Xiang, Ping; Li, Chao; Cui, Xin-Yi; Ma, Lena Q

    2018-04-01

    The effect of dust particle size on the distribution and bioaccessibility of flame retardants (FRs) in indoor dust remains unclear. In this study, we analyzed 20 FRs (including 6 organophosphate flame retardants (OPFRs), 8 polybrominated diphenyl ethers (PBDEs), 4 novel brominated flame retardants (NBFRs), and 2 dechlorane plus (DPs)) in composite dust samples from offices, public microenvironments (PME), and cars in Nanjing, China. Each composite sample (one per microenvironment) was separated into 6 size fractions (F1-F6: 200-2000 µm, 150-200 µm, 100-150 µm, 63-100 µm, 43-63 µm, and <43 µm). FR concentrations were the highest in car dust, being 16 and 6 times higher than those in offices and PME. The distribution of FRs in different size fractions was Kow-dependent and affected by surface area (log Kow = 1-4), total organic carbon (log Kow = 4-9), and FR migration pathways into dust (log Kow > 9). Bioaccessibility of FRs was measured by the physiologically-based extraction test, with OPFR bioaccessibility being 1.8-82%, while bioaccessible PBDEs, NBFRs, and DPs were under detection limits due to their high hydrophobicity. The OPFR bioaccessibility in the 200-2000 µm fraction was significantly higher than that in the <43 µm fraction, but with no difference among the other four fractions. Risk assessment was performed for the most abundant OPFR, tris(2-chloroethyl) phosphate. The average daily dose (ADD) values were the highest for the <43 µm fraction for all three types of dust using total concentrations, but no consistent trend was found among the three types of dust if based on bioaccessible concentrations. Our results indicated that dust size impacted human exposure estimation of FRs due to their variability in distribution and bioaccessibility among different fractions. For future risk assessment, size selection for dust sampling should be standardized and the bioaccessibility of FRs should not be overlooked. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. On the null distribution of Bayes factors in linear regression

    USDA-ARS?s Scientific Manuscript database

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...

  14. Species-abundance distribution patterns of soil fungi: contribution to the ecological understanding of their response to experimental fire in Mediterranean maquis (southern Italy).

    PubMed

    Persiani, Anna Maria; Maggi, Oriana

    2013-01-01

    Experimental fires, of both low and high intensity, were lit during summer 2000 and the following 2 y in the Castel Volturno Nature Reserve, southern Italy. Soil samples were collected Jul 2000-Jul 2002 to analyze the soil fungal community dynamics. Species abundance distribution patterns (geometric, logarithmic, log-normal, broken-stick) were compared. We plotted datasets with information on both species richness and abundance for total, xerotolerant and heat-stimulated soil microfungi. The xerotolerant fungi conformed to a broken-stick model for both the low- and high-intensity fires at 7 and 84 d after the fire; their distribution subsequently followed logarithmic models in the 2 y following the fire. The distribution of the heat-stimulated fungi changed from broken-stick to logarithmic models and eventually to a log-normal model during the post-fire recovery. Xerotolerant and, to a far greater extent, heat-stimulated soil fungi acquire an important functional role following soil water stress and/or fire disturbance; these disturbances let them occupy unsaturated habitats and become increasingly abundant over time.

  15. Multiplicative processes in visual cognition

    NASA Astrophysics Data System (ADS)

    Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.

    2014-03-01

    The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
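
    The multiplicative route to a log-normal is easy to reproduce numerically: a product of many positive random factors has an approximately normal logarithm. A minimal sketch (synthetic factors, not the eye-tracking data):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)

# Each observation is the product of many positive random factors; by the
# multiplicative version of the CLT, its logarithm is approximately normal,
# i.e. the observation itself is approximately log-normal.
n_factors, n_samples = 200, 50_000
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))
products = factors.prod(axis=1)

print(f"skewness of raw products  : {skew(products):7.2f}")         # strongly right-skewed
print(f"skewness of log(products) : {skew(np.log(products)):7.2f}")  # close to 0 (normal-like)
```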

  16. Kinetic energy distribution of multiply charged ions in Coulomb explosion of Xe clusters.

    PubMed

    Heidenreich, Andreas; Jortner, Joshua

    2011-02-21

    We report on the calculations of kinetic energy distribution (KED) functions of multiply charged, high-energy ions in Coulomb explosion (CE) of an assembly of elemental Xe_n clusters (average size ⟨n⟩ = 200-2171) driven by ultra-intense, near-infrared, Gaussian laser fields (peak intensities 10^15-4 × 10^16 W cm^-2, pulse lengths 65-230 fs). In this cluster size and pulse parameter domain, outer ionization is incomplete/vertical, incomplete/nonvertical, or complete/nonvertical, with CE occurring in the presence of nanoplasma electrons. The KEDs were obtained from double averaging of single-trajectory molecular dynamics simulation ion kinetic energies. The KEDs were doubly averaged over a log-normal cluster size distribution and over the laser intensity distribution of a spatial Gaussian beam, which constitutes either a two-dimensional (2D) or a three-dimensional (3D) profile, with the 3D profile (when the cluster beam radius is larger than the Rayleigh length) usually being experimentally realized. The general features of the doubly averaged KEDs manifest the smearing out of the structure corresponding to the distribution of ion charges, a marked increase of the KEDs at very low energies due to the contribution from the persistent nanoplasma, a distortion of the KEDs and of the average energies toward lower energy values, and the appearance of long low-intensity high-energy tails caused by the admixture of contributions from large clusters by size averaging. The doubly averaged simulation results account reasonably well (within 30%) for the experimental data for the cluster-size dependence of the CE energetics and for its dependence on the laser pulse parameters, as well as for the anisotropy in the angular distribution of the energies of the Xe^q+ ions. Possible applications of this computational study include a control of the ion kinetic energies by the choice of the laser intensity profile (2D/3D) in the laser-cluster interaction volume.

  17. Effect of stimulus configuration on crowding in strabismic amblyopia.

    PubMed

    Norgett, Yvonne; Siderov, John

    2017-11-01

    Foveal vision in strabismic amblyopia can show increased levels of crowding, akin to typical peripheral vision. Target-flanker similarity and visual-acuity test configuration may cause the magnitude of crowding to vary in strabismic amblyopia. We used custom-designed visual acuity tests to investigate crowding in observers with strabismic amblyopia. LogMAR was measured monocularly in both eyes of 11 adults with strabismic or mixed strabismic/anisometropic amblyopia using custom-designed letter tests. The tests used single-letter and linear formats with either bar or letter flankers to introduce crowding. Tests were presented monocularly on a high-resolution display at a test distance of 4 m, using standardized instructions. For each condition, five letters of each size were shown; testing continued until three letters of a given size were named incorrectly. Uncrowded logMAR was subtracted from logMAR in each of the crowded tests to highlight the crowding effect. Repeated-measures ANOVA showed that letter flankers and linear presentation individually resulted in poorer performance in the amblyopic eyes (respectively, mean normalized logMAR = 0.29, SE = 0.07, mean normalized logMAR = 0.27, SE = 0.07; p < 0.05) and together had an additive effect (mean = 0.42, SE = 0.09, p < 0.001). There was no difference across the tests in the fellow eyes (p > 0.05). Both linear presentation and letter rather than bar flankers increase crowding in the amblyopic eyes of people with strabismic amblyopia. These results suggest the influence of more than one mechanism contributing to crowding in linear visual-acuity charts with letter flankers.

  18. Algae Tile Data: 2004-2007, BPA-51; Preliminary Report, October 28, 2008.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holderman, Charles

    Multiple files containing 2004 through 2007 Tile Chlorophyll data for the Kootenai River sites designated as: KR1, KR2, KR3, KR4 (Downriver) and KR6, KR7, KR9, KR9.1, KR10, KR11, KR12, KR13, KR14 (Upriver) were received by SCS. For a complete description of the sites covered, please refer to http://ktoi.scsnetw.com. To maintain consistency with the previous SCS algae reports, all analyses were carried out separately for the Upriver and Downriver categories, as defined in the aforementioned paragraph. The Upriver designation, however, now includes three additional sites, KR11, KR12, and the nutrient addition site, KR9.1. Summary statistics and information on the four responses, chlorophyll a, chlorophyll a Accrual Rate, Total Chlorophyll, and Total Chlorophyll Accrual Rate are presented in Print Out 2. Computations were carried out separately for each river position (Upriver and Downriver) and year. For example, the Downriver position in 2004 showed an average Chlorophyll a level of 25.5 mg with a standard deviation of 21.4 and minimum and maximum values of 3.1 and 196 mg, respectively. The Upriver data in 2004 showed a lower overall average chlorophyll a level at 2.23 mg with a lower standard deviation (3.6) and minimum and maximum values of 0.13 and 28.7, respectively. A more comprehensive summary of each variable and position is given in Print Out 3. This lists the information above as well as other summary information such as the variance, standard error, various percentiles and extreme values. Using the 2004 Downriver Chlorophyll a as an example again, the variance of this data was 459.3 and the standard error of the mean was 1.55. The median value or 50th percentile was 21.3, meaning 50% of the data fell above and below this value. It should be noted that this value is somewhat different than the mean of 25.5. This is an indication that the frequency distribution of the data is not symmetrical (skewed). The skewness statistic, listed as part of the first section of each analysis, quantifies this. In a symmetric distribution, such as a Normal distribution, the skewness value would be 0. The tile chlorophyll data, however, shows larger values. Chlorophyll a, in the 2004 Downriver example, has a skewness statistic of 3.54, which is quite high. In the last section of the summary analysis, the stem and leaf plot graphically demonstrates the asymmetry, showing most of the data centered around 25 with a large value at 196. The final plot is referred to as a normal probability plot and graphically compares the data to a theoretical normal distribution. For chlorophyll a, the data (asterisks) deviate substantially from the theoretical normal distribution (diagonal reference line of pluses), indicating that the data is non-normal. Other response variables in both the Downriver and Upriver categories also indicated skewed distributions. Because the sample size and mean comparison procedures below require symmetrical, normally distributed data, each response in the data set was logarithmically transformed. The logarithmic transformation, in this case, can help mitigate skewness problems. The summary statistics for the four transformed responses (log-ChlorA, log-TotChlor, and log-accrual) are given in Print Out 4. For the 2004 Downriver Chlorophyll a data, the logarithmic transformation reduced the skewness value to -0.36 and produced a more bell-shaped symmetric frequency distribution. Similar improvements are shown for the remaining variables and river categories. Hence, all subsequent analyses given below are based on logarithmic transformations of the original responses.
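
    The effect of the logarithmic transformation on skewness is straightforward to reproduce. A minimal sketch on synthetic right-skewed values standing in for the chlorophyll a data:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)

# Synthetic right-skewed "chlorophyll a" values (illustrative, not the Kootenai data)
chl_a = rng.lognormal(mean=3.0, sigma=0.8, size=200)

print(f"mean = {chl_a.mean():.1f}, median = {np.median(chl_a):.1f}")
print(f"skewness (raw values): {skew(chl_a):.2f}")
print(f"skewness (log scale) : {skew(np.log(chl_a)):.2f}")
```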

  19. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    PubMed

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fits were obtained with the log-normal function.
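
    Candidate distributions like these can be compared by maximum likelihood and Akaike's information criterion using standard library fits. A minimal sketch on synthetic diameters (not the inventory data); fixing the location parameter at zero is an assumption of the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic diameters (cm) with the reverse-J shape typical of uneven-aged stands
# (illustrative data, not the forest inventory used in the paper).
dbh = 8.0 * rng.weibull(1.1, size=500) + 3.0

candidates = {
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "Weibull 2P": stats.weibull_min,
    "Burr": stats.burr12,
}

for name, dist in candidates.items():
    params = dist.fit(dbh, floc=0)               # location fixed at zero
    log_lik = np.sum(dist.logpdf(dbh, *params))
    k = len(params) - 1                          # location was not estimated
    aic = 2 * k - 2 * log_lik                    # lower AIC = better fit
    print(f"{name:11s} AIC = {aic:8.1f}")
```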

  20. Galaxy evolution by color-log(n) type since redshift unity in the Hubble Ultra Deep Field

    NASA Astrophysics Data System (ADS)

    Cameron, E.; Driver, S. P.

    2009-01-01

    Aims: We explore the use of the color-log(n) (where n is the global Sérsic index) plane as a tool for subdividing the galaxy population in a physically-motivated manner out to redshift unity. We thereby aim to quantify surface brightness evolution by color-log(n) type, accounting separately for the specific selection and measurement biases against each. Methods: We construct (u-r) color-log(n) diagrams for distant galaxies in the Hubble Ultra Deep Field (UDF) within a series of volume-limited samples to z=1.5. The color-log(n) distributions of these high redshift galaxies are compared against that measured for nearby galaxies in the Millennium Galaxy Catalogue (MGC), as well as to the results of visual morphological classification. Based on this analysis we divide our sample into three color-structure classes, namely “red, compact”, “blue, diffuse” and “blue, compact”. Luminosity-size diagrams are constructed for members of the two largest classes (“red, compact” and “blue, diffuse”), both in the UDF and the MGC. Artificial galaxy simulations (for systems with exponential and de Vaucouleurs profile shapes alternately) are used to identify “bias-free” regions of the luminosity-size plane in which galaxies are detected with high completeness, and their fluxes and sizes recovered with minimal surface brightness-dependent biases. Galaxy evolution is quantified via comparison of the low and high redshift luminosity-size relations within these “bias-free” regions. Results: We confirm the correlation between color-log(n) plane position and visual morphological type observed locally and in other high redshift studies in the color and/or structure domain. The combined effects of observational uncertainties, the morphological K-correction and cosmic variance preclude a robust statistical comparison of the shape of the MGC and UDF color-log(n) distributions. However, in the interval 0.75 < z < 1.0, where the UDF i-band samples close to rest-frame B-band light (i.e., the morphological K-correction between our samples is negligible), we are able to present tentative evidence of bimodality, albeit for a very small sample size (17 galaxies). Our unique approach to quantifying selection and measurement biases in the luminosity-size plane highlights the need to consider errors in the recovery of both magnitudes and sizes, and their dependence on profile shape. Motivated by these results we divide our sample into the three color-structure classes mentioned above and quantify luminosity-size evolution by galaxy type. Specifically, we detect decreases in B-band surface brightness of 1.57 ± 0.22 mag arcsec^-2 and 1.65 ± 0.22 mag arcsec^-2 for our “blue, diffuse” and “red, compact” classes respectively between redshift unity and the present day.

  1. Reflectance of micron-sized dust particles retrieved with the Umov law

    NASA Astrophysics Data System (ADS)

    Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy

    2017-03-01

    The maximum positive polarization Pmax that initially unpolarized light acquires when scattered from a particulate surface inversely correlates with its geometric albedo A. In the literature, this phenomenon is known as the Umov law. We investigate the Umov law in application to single-scattering submicron and micron-sized agglomerated debris particles, model particles that have highly irregular morphology. We find that if the complex refractive index m is constrained to Re(m)=1.4-1.7 and Im(m)=0-0.15, model particles of a given size distribution have a linear inverse correlation between log(Pmax) and log(A). This correlation resembles what is measured in particulate surfaces, suggesting a similar mechanism governing the Umov law in both systems. We parameterize the dependence of log(A) on log(Pmax) of single-scattering particles and analyze the airborne polarimetric measurements of atmospheric aerosols reported by Dolgos & Martins in [1]. We conclude that Pmax ≈ 50% measured by Dolgos & Martins corresponds to very dark aerosols having geometric albedo A=0.019 ± 0.005.

  2. Including operational data in QMRA model: development and impact of model inputs.

    PubMed

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).

  3. Load-Based Lower Neck Injury Criteria for Females from Rear Impact from Cadaver Experiments.

    PubMed

    Yoganandan, Narayan; Pintar, Frank A; Banerjee, Anjishnu

    2017-05-01

    The objectives of this study were to derive lower neck injury metrics/criteria and injury risk curves for the force, moment, and interaction criterion in rear impacts for females. Biomechanical data were obtained from previous intact and isolated post mortem human subjects and head-neck complexes subjected to posteroanterior accelerative loading. Censored data were used in the survival analysis model. The primary shear force, sagittal bending moment, and interaction (lower neck injury criterion, LN_ic) metrics were significant predictors of injury. The most optimal distribution was selected (Weibull, log-normal, or log-logistic) using the Akaike information criterion according to the latest ISO recommendations for deriving risk curves. The Kolmogorov-Smirnov test was used to quantify the robustness of the assumed parametric model. The intercepts for the interaction index were extracted from the primary risk curves. Normalized confidence interval sizes (NCIS) were reported at discrete probability levels, along with the risk curves and 95% confidence intervals. A mean force of 214 N, a moment of 54 Nm, and an LN_ic of 0.89 were associated with a five percent probability of injury. The NCIS for these metrics were 0.90, 0.95, and 0.85. These preliminary results can be used as a first step in the definition of lower neck injury criteria for women under posteroanterior accelerative loading in crashworthiness evaluations.

  4. The Fractal Behavior of Crystal Distribution of la Gloria Pluton, Chile

    NASA Astrophysics Data System (ADS)

    Gutiérrez, F. J.; Payacán, I. J.; Pasten, D.; Aravena, A.; Gelman, S. E.; Bachmann, O.; Parada, M. A.

    2013-12-01

    We utilize fractal analysis to study the spatial distributions of crystals in a 10 Ma granitic intrusion (La Gloria pluton) located in the central Chilean Andes. Previous work determined the crystal size distribution (CSD) and anisotropy of magnetic susceptibility (AMS) tensors throughout this pluton. Using orthogonal thin sections oriented along the AMS tensor axes, we have applied fractal analysis to three magmatic crystal families: plagioclase, ferromagnesian minerals (biotite and amphibole), and Fe-Ti oxides (magnetite with minor ilmenite). We find that plagioclase and ferromagnesian minerals have a semi-logarithmic CSD (S-CSD), given by: log(n/n_0) = -L/C (1), where n [mm^-4], n_0 [mm^-4], L [mm] and C [mm] are the crystal density, intercept (nucleation density; L=0), size of crystals (three axes) and characteristic length, respectively. In contrast, Fe-Ti oxides have a fractal CSD (F-CSD, power-law size distribution), given by: log(n) = -D_n log(L) + n_1 (2), where D_n and n_1 [log(mm^-4)] are a non-dimensional proportionality constant and the logarithm of the initial crystallization density (n_1 = log(n(L=1 mm))), respectively. Finally, we calculate the fractal dimension (D_0) by applying the box-counting method to each crystal thin-section image, using: log(N) = -D_0 log(ε) (3), where N and ε are the number of boxes occupied by minerals and the length of the square box, respectively. Results indicate that D_0 values (Eq. 3) are well defined for all minerals, and are higher for plagioclase than for ferromagnesian minerals and lowest for Fe-Ti oxides. D_0 values are correlated with n_0 and -1/C for the S-CSD (Eq. 1), and with n_1 values for the F-CSD (Eq. 2). These correlations between fractal dimensions and CSD parameters suggest crystal growth follows a fractal behaviour in magmatic systems. Fractal behaviour of the CSD means that the spatial distribution of crystals follows an all-scale pattern as part of a self-organized magmatic system. We interpret the S-CSD of plagioclase and ferromagnesian minerals as a consequence of early to intermediate crystal growth, whereas the F-CSD of magnetite is also a consequence of late magmatic equilibration by the increase of fine magnetite crystals (e.g. reaction of hornblende to magnetite plus actinolite, biotite and titanite). Acknowledgments. This research has been developed by the FONDECYT N°11100241 and PBCT-PDA07 projects granted by CONICYT (Chilean National Commission for Science and Technology). I.P. is supported by CONICYT magister grant N°22130729. F.G. and I.P. thank FONDAP N°15090013 for support during the conference. D.P. acknowledges FONDECYT grant N°3120237.
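
    Equation (3) is the standard box-counting estimate of D_0: count the boxes occupied by the phase of interest at several box sizes and regress log N against log ε. A minimal sketch on a synthetic binary mask standing in for a segmented thin-section image:

```python
import numpy as np

def box_counting_dimension(mask, box_sizes):
    """Estimate D0 from a 2-D boolean mask (True where the crystal phase is present)."""
    counts = []
    for eps in box_sizes:
        n_boxes = 0
        for i in range(0, mask.shape[0], eps):
            for j in range(0, mask.shape[1], eps):
                if mask[i:i + eps, j:j + eps].any():
                    n_boxes += 1
        counts.append(n_boxes)
    # log N = -D0 log(eps) + const, so the regression slope gives -D0
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Illustrative mask: random scattered pixels standing in for a crystal phase
rng = np.random.default_rng(5)
mask = rng.random((512, 512)) < 0.05
print(f"D0 ~ {box_counting_dimension(mask, [2, 4, 8, 16, 32, 64]):.2f}")
```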

  5. Performance of statistical models to predict mental health and substance abuse cost.

    PubMed

    Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K

    2006-10-26

    Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.

  6. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is tough, or even impossible. First, the tail is the part of the distribution for which we have the least empirical information available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records were examined, from all over the world and with sample sizes over 100 years, revealing that heavy-tailed distributions can describe rainfall extremes more accurately.
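
    The mean excess function itself is a one-line computation: the average exceedance above each threshold. A minimal sketch contrasting an exponential tail (flat MEF) with a heavy tail (rising MEF) on synthetic data, not the rainfall records:

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_excess(x, thresholds):
    """e(u) = mean(x - u | x > u) for each threshold u."""
    return np.array([np.mean(x[x > u] - u) for u in thresholds])

# Synthetic stand-ins for daily rainfall: exponential (light tail) vs Pareto-type (heavy tail)
light = rng.exponential(scale=10.0, size=20_000)
heavy = (rng.pareto(3.0, size=20_000) + 1.0) * 10.0

for name, x in [("exponential", light), ("heavy-tailed", heavy)]:
    u = np.quantile(x, np.linspace(0.70, 0.98, 15))
    slope, _ = np.polyfit(u, mean_excess(x, u), 1)
    # slope ~ 0 for an exponential tail, clearly positive for a heavy tail
    print(f"{name:12s}: MEF slope ~ {slope:+.2f}")
```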

  7. A Bayesian Nonparametric Meta-Analysis Model

    ERIC Educational Resources Information Center

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  8. Assessment of variations in thermal cycle life data of thermal barrier coated rods

    NASA Astrophysics Data System (ADS)

    Hendricks, R. C.; McDonald, G.

    An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry, and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.

  9. Assessment of variations in thermal cycle life data of thermal barrier coated rods

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Mcdonald, G.

    1981-01-01

    An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry, and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.

  10. Testing models of parental investment strategy and offspring size in ants.

    PubMed

    Gilboa, Smadar; Nonacs, Peter

    2006-01-01

    Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.

  11. Income distribution dependence of poverty measure: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Amit K.; Mallick, Sushanta K.

    2007-04-01

    Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is not tenable anymore once the poverty index is found to follow a Pareto distribution. Here although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.

  12. Universal scaling of grain size distributions during dislocation creep

    NASA Astrophysics Data System (ADS)

    Aupart, Claire; Dunkel, Kristina G.; Angheluta, Luiza; Austrheim, Håkon; Ildefonse, Benoît; Malthe-Sørenssen, Anders; Jamtveit, Bjørn

    2017-04-01

    Grain size distributions are major sources of information about the mechanisms involved in ductile deformation processes and are often used as paleopiezometers (stress gauges). Several factors have been claimed to influence the stress vs. grain size relation, including the water content (Jung & Karato 2001), the temperature (De Bresser et al., 2001), the crystal orientation (Linckens et al., 2016), the presence of second-phase particles (Doherty et al. 1997; Cross et al., 2015), and heterogeneous stress distributions (Platt & Behr 2011). However, most of the studies of paleopiezometers have been done in the laboratory under conditions different from those in natural systems. It is therefore essential to complement these studies with observations of naturally deformed rocks. We have measured olivine grain sizes in ultramafic rocks from the Leka ophiolite in Norway and from Alpine Corsica using electron backscatter diffraction (EBSD) data, and calculated the corresponding probability density functions. We compared our results with samples from other studies and localities that have formed under a wide range of stress and strain rate conditions. All distributions collapse onto one universal curve in a log-log diagram where grain sizes are normalized by the mean grain size of each sample. The curve is composed of two straight segments with distinct slopes for grains above and below the mean grain size. These observations indicate that a surprisingly simple and universal power-law scaling describes the grain size distribution in ultramafic rocks during dislocation creep irrespective of stress levels and strain rates. Cross, Andrew J., Susan Ellis, and David J. Prior. 2015. « A Phenomenological Numerical Approach for Investigating Grain Size Evolution in Ductiley Deforming Rocks ». Journal of Structural Geology 76 (July): 22-34. doi:10.1016/j.jsg.2015.04.001. De Bresser, J. H. P., J. H. Ter Heege, and C. J. Spiers. 2001. « Grain Size Reduction by Dynamic Recrystallization: Can It Result in Major Rheological Weakening? » International Journal of Earth Sciences 90 (1): 28-45. Doherty, R. D., D. A. Hughes, F. J. Humphreys, J. J. Jonas, D. J. Jensen, M. E. Kassner, W. E. King, T. R. McNelley, H. J. McQueen, and A. D. Rollett. 1997. « Current Issues in Recrystallization: A Review ». Materials Science and Engineering a-Structural Materials Properties Microstructure and Processing 238 (2): 219-74. doi:10.1016/S0921-5093(97)00424-3. Jung, H., and S. I. Karato. 2001. « Effects of Water on Dynamically Recrystallized Grain-Size of Olivine ». Journal of Structural Geology 23 (9): 1337-44. doi:10.1016/S0191-8141(01)00005-0. Linckens, J., G. Zulauf, and J. Hammer. 2016. « Experimental Deformation of Coarse-Grained Rock Salt to High Strain ». Journal of Geophysical Research-Solid Earth 121 (8): 6150-71. doi:10.1002/2016JB012890. Platt, J.P., and W.M. Behr. 2011. « Grainsize Evolution in Ductile Shear Zones: Implications for Strain Localization and the Strength of the Lithosphere ». Journal of Structural Geology 33 (4): 537-50. doi:10.1016/j.jsg.2011.01.018.

  13. The plasma parameter log (TG/HDL-C) as an atherogenic index: correlation with lipoprotein particle size and esterification rate in apoB-lipoprotein-depleted plasma (FER(HDL)).

    PubMed

    Dobiásová, M; Frohlich, J

    2001-10-01

    To evaluate if the logarithm of the ratio of plasma concentration of triglycerides to HDL-cholesterol (Log[TG/HDL-C]) correlates with cholesterol esterification rates in apoB-lipoprotein-depleted plasma (FER(HDL)) and lipoprotein particle size. We analyzed previous data dealing with the parameters related to the FER(HDL) (an indirect measure of lipoprotein particle size). A total of 1433 subjects from 35 cohorts with various risks of atherosclerosis (cord plasma, children, healthy men and women, pre- and postmenopausal women, patients with hypertension, type 2 diabetes, dyslipidemia and patients with positive or negative angiography findings) were studied. The analysis revealed a strong positive correlation (r = 0.803) between FER(HDL) and Log(TG/HDL-C). This parameter, which we propose to call "atherogenic index of plasma" (AIP), is directly related to the risk of atherosclerosis in the above cohorts. We also confirmed in a cohort of 35 normal subjects a significant inverse correlation of LDL size with FER(HDL) (r = -0.818) and AIP (r = -0.776). Values of AIP correspond closely to those of FER(HDL) and to lipoprotein particle size and thus could be used as a marker of plasma atherogenicity.
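
    The index itself is simply the base-10 logarithm of the ratio of the two plasma concentrations. A minimal sketch with illustrative values (not patient data):

```python
import numpy as np

# Illustrative fasting plasma values in mmol/L (not data from the study)
tg = np.array([0.8, 1.4, 2.3, 3.1])       # triglycerides
hdl_c = np.array([1.6, 1.3, 1.0, 0.8])    # HDL cholesterol

aip = np.log10(tg / hdl_c)                # atherogenic index of plasma, AIP = log(TG/HDL-C)
for t, h, a in zip(tg, hdl_c, aip):
    print(f"TG = {t:.1f}, HDL-C = {h:.1f}  ->  AIP = {a:+.2f}")
```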

  14. A modified weighted function method for parameter estimation of Pearson type three distribution

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called the Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to shift the estimation of the coefficient of variation (CV) and coefficient of skewness (CS) from the original higher-moment computations to first-order moment calculations. The estimators for CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of the weight function, and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte-Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared by designing a Monte-Carlo experiment in which samples were obtained from the log-Pearson type three distribution (LPE3), the three-parameter log-normal distribution (LN3), and the generalized extreme value distribution (GEV), respectively, but all were used as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.

  15. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    PubMed

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of the flexible shape of its hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  16. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
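
    The composite distribution described above can be sketched as a two-component mixture of a Weibull and a log-normal; all parameter values below are placeholders chosen only to make the example run, not fits to the sunspot or active-region databases:

```python
import numpy as np
from scipy import stats

# Placeholder mixture: a Weibull component for the low-flux end and a log-normal
# component for the high-flux end (weights and parameters are illustrative only).
w = 0.6                                          # weight of the Weibull component
weibull = stats.weibull_min(c=0.7, scale=3e20)   # dominates below ~1e21 Mx
lognormal = stats.lognorm(s=1.0, scale=1e22)     # dominates above ~1e22 Mx

flux = np.logspace(19, 24, 6)                    # magnetic flux, Mx
pdf = w * weibull.pdf(flux) + (1.0 - w) * lognormal.pdf(flux)
for f, p in zip(flux, pdf):
    print(f"flux = {f:8.1e} Mx   pdf = {p:.3e}")
```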

  17. Aircraft Particle Emissions eXperiment (APEX)

    NASA Technical Reports Server (NTRS)

    Wey, C. C.; Anderson, B. E.; Hudgins, C.; Wey, C.; Li-Jones, X.; Winstead, E.; Thornhill, L. K.; Lobo, P.; Hagen, D.; Whitefield, P.

    2006-01-01

    APEX systematically investigated the gas-phase and particle emissions from a CFM56-2C1 engine on NASA's DC-8 aircraft as functions of engine power, fuel composition, and exhaust plume age. Emissions parameters were measured at 11 engine power settings, ranging from idle to maximum thrust, in samples collected at 1, 10, and 30 m downstream of the exhaust plane as the aircraft burned three fuels to stress relevant chemistry. Gas-phase emission indices measured at 1 m were in good agreement with the ICAO data and predictions provided by GEAE empirical modeling tools. Soot particles emitted by the engine exhibited a log-normal size distribution peaked between 15 and 40 nm, depending on engine power. Samples collected 30 m downstream of the engine exhaust plane exhibited a prominent nucleation mode.

  18. Analysis of aperture averaging measurements. [laser scintillation data on the effect of atmospheric turbulence on signal fluctuations

    NASA Technical Reports Server (NTRS)

    Fried, D. L.

    1975-01-01

    Laser scintillation data obtained by the NASA Goddard Space Flight Center balloon flight no. 5 from White Sands Missile Range on 19 October 1973 are analyzed. The measurement data, taken with various size receiver apertures, were related to predictions of aperture averaging theory, and it is concluded that the data are in reasonable agreement with theory. The following parameters are assigned to the vertical distribution of the strength of turbulence during the period of the measurements (daytime), for λ = 0.633 microns and the source at the zenith: the aperture averaging length is d_o = 0.125 m, and the log-amplitude variance is (β_l)^2 = 0.084 square nepers. This corresponds to a normalized point intensity variance of 0.40.

  19. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.

  20. Physical properties of the ionized gas and brightness distribution in NGC4736

    NASA Astrophysics Data System (ADS)

    Rodrigues, I.; Dottori, H.; Cepa, J.; Vilchez, J.

    1998-03-01

    In this work we study the galaxy NGC4736, using narrow band interference filters imaging centered at the emission lines {Oii} {3727+3729}, Hβ, {Oiii} {5007}, Hα, {Sii} {6716+6730} and {Siii} {9070} and nearby continua. We have obtained sizes, positions, emission line absolute fluxes, and continua intensities for 90 Hii regions, mainly distributed in a ring-like structure of 3.2 kpc in diameter. The Hα luminosities are in the range 37.3 <= log L_Hα <= 39.4 (L_Hα in erg s^-1). The Hii regions size distribution presents a characteristic diameter D_0 = 115 pc and verifies the relation log(L_Hα) ~ 3 log D. The temperature of the ionizing sources and the metallicity of the Hii regions are respectively in the ranges 3.4×10^4 <~ T_⋆ <~ 4.0×10^4 K and 8.5 <~ 12 + log(O/H) <~ 9.3. The masses of the ionizing clusters are in the range 5×10^3 <~ M_T/M_sun <~ 2×10^5. The continua radial surface brightness distribution is better fitted by the superposition of a de Vaucouleurs law and thin and thick exponential disk laws. The monochromatic colors show that outside the star forming ring the disk presents a younger stellar population than inside it. Tables 3 and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

  1. Probabilistic structural analysis of a truss typical for space station

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.

    1990-01-01

    A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions such as the Weibull, exponential, normal, log-normal, etc. The cumulative distribution functions (CDFs) for the response functions considered and the sensitivities associated with the primitive variables for a given response are investigated. These sensitivities help in determining the dominating primitive variables for that response.
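
    A minimal Monte Carlo sketch of the same idea, using a single hypothetical truss member rather than the NESSUS model: primitive variables are sampled from log-normal, normal and Weibull distributions, an empirical CDF of the response is built, and rank correlations serve as a crude stand-in for the sensitivity measures.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(8)
        n = 20_000

        # Hypothetical primitive variables for one axially loaded member (not the NESSUS model).
        E = rng.lognormal(mean=np.log(200e9), sigma=0.05, size=n)   # Young's modulus (Pa)
        A = rng.normal(loc=2.0e-3, scale=1.0e-4, size=n)            # cross-sectional area (m^2)
        P = rng.weibull(a=3.0, size=n) * 1.0e4                      # applied load (N)
        L = 3.0                                                     # member length (m), deterministic

        delta = P * L / (E * A)          # response function: axial elongation (m)

        # Empirical cumulative distribution function of the response.
        sorted_delta = np.sort(delta)
        cdf = np.arange(1, n + 1) / n

        # Crude sensitivity ranking: rank correlation of each primitive variable with the response.
        for name, var in (("E", E), ("A", A), ("P", P)):
            rho, _ = spearmanr(var, delta)
            print(name, rho)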

  2. Estimation of optical properties of aerosols and bidirectional reflectance from PARASOL/POLDER data over land

    NASA Astrophysics Data System (ADS)

    Kusaka, Takashi; Miyazaki, Go

    2014-10-01

    When monitoring target areas covered with vegetation from a satellite, it is very useful to estimate the vegetation index using the surface anisotropic reflectance, which depends on both solar and viewing geometries, from satellite data. In this study, we describe an algorithm for estimating optical properties of atmospheric aerosols such as the optical thickness (τ), the refractive index (Nr), the mixing ratio of small particles in the bimodal log-normal distribution function (C), and the bidirectional reflectance (R) from only the radiance and polarization at the 865 nm channel received by PARASOL/POLDER. The parameters of the bimodal log-normal distribution function (mean radius r1 and standard deviation σ1 of fine aerosols, and r2 and σ2 of coarse aerosols) were fixed at values estimated from the monthly averaged size distribution at AERONET sites managed by NASA near the target area. Moreover, the contribution of the surface reflectance with directional anisotropy to the polarized radiance received by the satellite is assumed to be small, because our ground-based polarization measurements of light reflected by grassland show that the degree of polarization of the reflected light is very low at the 865 nm channel. First, aerosol properties were estimated from only the polarized radiance, and then the bidirectional reflectance given by the Ross-Li BRDF model was estimated from only the total radiance at target areas in PARASOL/POLDER data over the Japanese islands taken on April 28, 2012 and April 25, 2010. The estimated optical thickness of aerosols was checked against values given at AERONET sites, and the estimated BRDF parameters were compared with those of vegetation measured from a radio-controlled helicopter. Consequently, it is shown that the algorithm described in the present study provides reasonable values for aerosol properties and surface bidirectional reflectance.
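
    A minimal sketch of the bimodal log-normal number-size distribution assumed in such retrievals, with fine-mode fraction C and mode parameters (r1, σ1) and (r2, σ2); the numerical values below are hypothetical, not those fixed in the study.

        import numpy as np

        def lognormal_mode(r, r_mode, sigma_g):
            """Single log-normal number-size distribution dN/dlnr (unit total number)."""
            return (np.exp(-0.5 * (np.log(r / r_mode) / np.log(sigma_g)) ** 2)
                    / (np.sqrt(2.0 * np.pi) * np.log(sigma_g)))

        def bimodal_lognormal(r, c_fine, r1, sigma1, r2, sigma2):
            """Bimodal distribution: mixing ratio c_fine of the fine mode plus the coarse mode."""
            return (c_fine * lognormal_mode(r, r1, sigma1)
                    + (1.0 - c_fine) * lognormal_mode(r, r2, sigma2))

        # Hypothetical parameters (in practice taken from monthly AERONET size distributions).
        r = np.logspace(-2, 1, 200)                      # radius grid, micrometres
        dndlnr = bimodal_lognormal(r, c_fine=0.7, r1=0.1, sigma1=1.6, r2=2.0, sigma2=2.0)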

  3. Probabilistic properties of wavelets in kinetic surface roughening

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.

    2001-08-01

    Using the data of a recent numerical simulation [M. Ahr and M. Biehl, Phys. Rev. E 62, 1773 (2000)] of homoepitaxial growth it is shown that the observed probability distribution of a wavelet based measure of the growing surface roughness is consistent with a stretched log-normal distribution and the corresponding branching dimension depends on the level of particle desorption.

  4. Spatial Correlation of Rain Drop Size Distribution from Polarimetric Radar and 2D-Video Disdrometers

    NASA Technical Reports Server (NTRS)

    Thurai, Merhala; Bringi, Viswanathan; Gatlin, Patrick N.; Wingo, Matt; Petersen, Walter Arthur; Carey, Lawrence D.

    2011-01-01

    Spatial correlations of two of the main rain drop-size distribution (DSD) parameters - namely the median-volume diameter (Do) and the normalized intercept parameter (Nw) - as well as rainfall rate (R) are determined from polarimetric radar measurements, with added information from 2D video disdrometer (2DVD) data. Two cases have been considered, (i) a widespread, long-duration rain event in Huntsville, Alabama, and (ii) an event with localized intense rain-cells within a convection line which occurred during the MC3E campaign. For the first case, data from a C-band polarimetric radar (ARMOR) were utilized, with two 2DVDs acting as ground truth, both located at the same site 15 km from the radar. The radar was operated in a special near-dwelling mode over the 2DVDs. In the second case, data from an S-band polarimetric radar (NPOL) were utilized, with at least five 2DVDs located between 20 and 30 km from the radar. In both rain event cases, comparisons of Do, log10(Nw) and R were made between radar-derived estimates and 2DVD-based measurements, and were found to be in good agreement; in both cases, the radar data were subsequently used to determine the spatial correlations. For the first case, the spatial decorrelation distance was found to be smallest for R (4.5 km) and largest for Do (8.2 km). For log10(Nw) it was 7.2 km (Fig. 1). For the second case, the corresponding decorrelation distances were somewhat smaller but had a directional dependence. In Fig. 2, we show an example of Do comparisons between NPOL-based estimates and 1-minute DSD-based estimates from one of the five 2DVDs.

  5. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
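
    The transformation-based formulas of the paper are not reproduced here; as a generic companion, the sketch below estimates power for simple logistic regression by simulation (Wald test via statsmodels) and scans candidate sample sizes. The effect sizes, target power and other settings are hypothetical.

        import numpy as np
        import statsmodels.api as sm

        def empirical_power(n, beta0=-0.5, beta1=0.4, n_sim=500, alpha=0.05, seed=1):
            """Estimate power of the Wald test for beta1 in simple logistic regression."""
            rng = np.random.default_rng(seed)
            rejections = 0
            for _ in range(n_sim):
                x = rng.normal(size=n)
                p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
                y = rng.binomial(1, p)
                fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
                rejections += fit.pvalues[1] < alpha
            return rejections / n_sim

        # Increase n until the simulated power reaches the desired level (hypothetical settings).
        for n in (100, 150, 200, 300):
            print(n, empirical_power(n))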

  6. Size exclusion deep bed filtration: Experimental and modelling uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badalyan, Alexander, E-mail: alexander.badalyan@adelaide.edu.au; You, Zhenjiang; Aji, Kaiser

    A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass bead-formed porous media enabled verification of the two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is a dominant particle capture mechanism in the present study: the calculated significant repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads is an indication of their mutual repulsion, thus fulfilling the necessary condition for size exclusion. Applying the linear uncertainty propagation method in the form of a truncated Taylor's series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated using CSUs in experimentally determined parameters such as: an inlet volumetric flowrate of suspension, particle number in suspensions, particle concentrations in inlet and outlet streams, particle and pore throat size distributions. Weathering of glass beads in high alkaline solutions does not appreciably change particle size distribution, and, therefore, is not considered as an additional contributor to the weighted mean particle radius and corresponding weighted mean standard deviation. Weighted mean particle radius and LogNormal mean pore throat radius are characterised by the highest CSUs among all experimental parameters, translating to a high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via the two theoretical models are characterised by higher CSUs than those for experimental data. The model accounting for the fraction of inaccessible flow as a function of latex particle radius excellently predicts normalised suspended particle concentrations for the whole range of jamming ratios. The presented uncertainty analysis can also be used for comparison of intra- and inter-laboratory particle size exclusion data.
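
    A generic sketch of the first-order (truncated Taylor series) propagation used to form combined standard uncertainties; the response function and input uncertainties below are hypothetical stand-ins for the measured quantities listed in the abstract.

        import numpy as np

        def combined_standard_uncertainty(f, x, u, eps=1e-6):
            """First-order (truncated Taylor series) propagation:
            u_c^2 = sum_i (df/dx_i)^2 * u_i^2, derivatives by central differences."""
            x = np.asarray(x, dtype=float)
            u = np.asarray(u, dtype=float)
            grad = np.empty_like(x)
            for i in range(x.size):
                step = eps * max(abs(x[i]), 1.0)
                xp, xm = x.copy(), x.copy()
                xp[i] += step
                xm[i] -= step
                grad[i] = (f(xp) - f(xm)) / (2.0 * step)
            return np.sqrt(np.sum((grad * u) ** 2))

        # Hypothetical example: a normalized concentration c_out / c_in and its input uncertainties.
        ratio = lambda v: v[0] / v[1]
        u_c = combined_standard_uncertainty(ratio, x=[45.0, 60.0], u=[2.0, 2.5])
        print(u_c)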

  7. Golf-course and funnel energy landscapes: Protein folding concepts in martensites

    NASA Astrophysics Data System (ADS)

    Shankaraiah, N.

    2017-06-01

    We use protein folding energy landscape concepts such as golf course and funnel to study re-equilibration in athermal martensites under systematic temperature-quench Monte Carlo simulations. On quenching below a transition temperature, the seeded high-symmetry parent-phase austenite that converts to the low-symmetry product-phase martensite, through autocatalytic twinning or elastic photocopying, has both rapid conversions and incubation delays in the temperature-time-transformation phase diagram. We find that the rapid conversions (incubation delays) at low (high) temperatures arise from the large (small) size of the golf-course edge that has the funnel inside for negative energy states. In the incubating state, the strain structure factor enters into the Brillouin-zone golf course through searches for finite transitional pathways which close off at the transition temperature with Vogel-Fulcher divergences that are insensitive to Hamiltonian energy scales and log-normal distributions, as signatures of dominant entropy barriers. The crossing of the entropy barrier is identified through energy occupancy distributions, Monte Carlo acceptance fractions, heat emission, and internal work.

  8. Genetic Engineering of Optical Properties of Biomaterials

    NASA Astrophysics Data System (ADS)

    Gourley, Paul; Naviaux, Robert; Yaffe, Michael

    2008-03-01

    Baker's yeast cells are easily cultured and can be manipulated genetically to produce large numbers of bioparticles (cells and mitochondria) with controllable size and optical properties. We have recently employed nanolaser spectroscopy to study the refractive index of individual cells and isolated mitochondria from two mutant strains. Results show that biomolecular changes induced by mutation can produce bioparticles with radical changes in refractive index. Wild-type mitochondria exhibit a distribution with a well-defined mean and small variance. In striking contrast, mitochondria from one mutant strain produced a histogram that is highly collapsed with a ten-fold decrease in the mean and standard deviation. In a second mutant strain we observed an opposite effect with the mean nearly unchanged but the variance increased nearly a thousand-fold. Both histograms could be self-consistently modeled with a single, log-normal distribution. The strains were further examined by 2-dimensional gel electrophoresis to measure changes in protein composition. All of these data show that genetic manipulation of cells represents a new approach to engineering optical properties of bioparticles.

  9. Polymorphic mountain whitefish (Prosopium williamsoni) in a coastal riverscape: size class assemblages, distribution, and habitat associations

    USGS Publications Warehouse

    Starr, James C.; Torgersen, Christian E.

    2015-01-01

    We compared the assemblage structure, spatial distributions, and habitat associations of mountain whitefish (Prosopium williamsoni) morphotypes and size classes. We hypothesised that morphotypes would have different spatial distributions and would be associated with different habitat features based on feeding behaviour and diet. Spatially continuous sampling was conducted over a broad extent (29 km) in the Calawah River, WA (USA). Whitefish were enumerated via snorkelling in three size classes: small (10–29 cm), medium (30–49 cm), and large (≥50 cm). We identified morphotypes based on head and snout morphology: a pinocchio form that had an elongated snout and a normal form with a blunted snout. Large size classes of both morphotypes were distributed downstream of small and medium size classes, and normal whitefish were distributed downstream of pinocchio whitefish. Ordination of whitefish assemblages with nonmetric multidimensional scaling revealed that normal whitefish size classes were associated with higher gradient and depth, whereas pinocchio whitefish size classes were positively associated with pool area, distance upstream, and depth. Reach-scale generalised additive models indicated that normal whitefish relative density was associated with larger substrate size in downstream reaches (R2 = 0.64), and pinocchio whitefish were associated with greater stream depth in the reaches farther upstream (R2 = 0.87). These results suggest broad-scale spatial segregation (1–10 km), particularly between larger and more phenotypically extreme individuals. These results provide the first perspective on spatial distributions and habitat relationships of polymorphic mountain whitefish.

  10. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  11. Novel bayes factors that capture expert uncertainty in prior density specification in genetic association studies.

    PubMed

    Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin

    2015-05-01

    Bayes factors (BFs) are becoming increasingly important tools in genetic association studies, partly because they provide a natural framework for including prior information. The Wakefield BF (WBF) approximation is easy to calculate and assumes a normal prior on the log odds ratio (logOR) with a mean of zero. However, the prior variance (W) must be specified. Because of the potentially high sensitivity of the WBF to the choice of W, we propose several new BF approximations with logOR ∼N(0,W), but allow W to take a probability distribution rather than a fixed value. We provide several prior distributions for W which lead to BFs that can be calculated easily in freely available software packages. These priors allow a wide range of densities for W and provide considerable flexibility. We examine some properties of the priors and BFs and show how to determine the most appropriate prior based on elicited quantiles of the prior odds ratio (OR). We show by simulation that our novel BFs have superior true-positive rates at low false-positive rates compared to those from both P-value and WBF analyses across a range of sample sizes and ORs. We give an example of utilizing our BFs to fine-map the CASP8 region using genotype data on approximately 46,000 breast cancer case and 43,000 healthy control samples from the Collaborative Oncological Gene-environment Study (COGS) Consortium, and compare the single-nucleotide polymorphism ranks to those obtained using WBFs and P-values from univariate logistic regression. © 2015 The Authors. *Genetic Epidemiology published by Wiley Periodicals, Inc.
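
    A minimal sketch of the idea, assuming the commonly used form of the Wakefield approximation with logOR_hat ~ N(β, V) and prior β ~ N(0, W): the BF is computed once at a fixed W and once as a crude Monte Carlo average over a distribution on W (the paper instead derives closed-form BFs for specific priors on W). The summary statistics and the prior on W below are hypothetical.

        import numpy as np

        def wakefield_abf(beta_hat, se, W):
            """Wakefield approximate Bayes factor (evidence for H0 over H1),
            assuming logOR_hat ~ N(beta, V) with V = se^2 and prior beta ~ N(0, W)."""
            V = se ** 2
            z2 = (beta_hat / se) ** 2
            return np.sqrt((V + W) / V) * np.exp(-0.5 * z2 * W / (V + W))

        # Hypothetical summary statistics for one SNP.
        beta_hat, se = 0.15, 0.04

        # Fixed prior variance (standard WBF) ...
        abf_fixed = wakefield_abf(beta_hat, se, W=0.21 ** 2)

        # ... versus a crude Monte Carlo average over a distribution on W, in the spirit of
        # allowing W to be random rather than fixed (hypothetical prior on W).
        rng = np.random.default_rng(0)
        W_draws = rng.gamma(shape=2.0, scale=0.02, size=20000)
        abf_averaged = wakefield_abf(beta_hat, se, W_draws).mean()

        print(abf_fixed, abf_averaged)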

  12. Predicting clicks of PubMed articles.

    PubMed

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.
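
    A simplified illustration of fitting a log-normal-shaped click trend to an article's monthly access counts by least squares (scipy curve_fit); the paper's model additionally links accesses and time via quadratic and logistic functions and is fitted by maximum likelihood, which is not reproduced here. The access counts below are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def lognormal_trend(t, scale, mu, sigma):
            """Log-normal-shaped click trend: accesses at month t since the article appeared."""
            return (scale * np.exp(-0.5 * ((np.log(t) - mu) / sigma) ** 2)
                    / (t * sigma * np.sqrt(2.0 * np.pi)))

        # Hypothetical monthly access counts for one article.
        months = np.arange(1, 25)
        clicks = np.array([120, 260, 310, 280, 240, 200, 170, 150, 130, 115, 100, 92,
                           85, 78, 72, 66, 62, 58, 55, 52, 49, 47, 45, 43])

        params, _ = curve_fit(lognormal_trend, months, clicks, p0=(1000.0, 1.0, 1.0))
        predicted_next = lognormal_trend(25, *params)   # simple one-step-ahead prediction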

  13. Predicting clicks of PubMed articles

    PubMed Central

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed. PMID:24551386

  14. NMR Methods, Applications and Trends for Groundwater Evaluation and Management

    NASA Astrophysics Data System (ADS)

    Walsh, D. O.; Grunewald, E. D.

    2011-12-01

    Nuclear magnetic resonance (NMR) measurements have a tremendous potential for improving groundwater characterization, as they provide direct detection and measurement of groundwater and unique information about pore-scale properties. NMR measurements, commonly used in chemistry and medicine, are utilized in geophysical investigations through non-invasive surface NMR (SNMR) or downhole NMR logging measurements. Our recent and ongoing research has focused on improving the performance and interpretation of NMR field measurements for groundwater characterization. Engineering advancements have addressed several key technical challenges associated with SNMR measurements. Susceptibility of SNMR measurements to environmental noise has been dramatically reduced through the development of multi-channel acquisition hardware and noise-cancellation software. Multi-channel instrumentation (up to 12 channels) has also enabled more efficient 2D and 3D imaging. Previous limitations in measuring NMR signals from water in silt, clay and magnetic geology have been addressed by shortening the instrument dead-time from 40 ms to 4 ms, and increasing the power output. Improved pulse sequences have been developed to more accurately estimate NMR relaxation times and their distributions, which are sensitive to pore size distributions. Cumulatively, these advancements have vastly expanded the range of environments in which SNMR measurements can be obtained, enabling detection of groundwater in smaller pores, in magnetic geology, in the unsaturated zone, and near infrastructure (presented here in case studies). NMR logging can provide high-resolution estimates of bound and mobile water content and pore size distributions. While NMR logging has been utilized in oil and gas applications for decades, its use in groundwater investigations has been limited by the large size and high cost of oilfield NMR logging tools and services. Recently, engineering efforts funded by the US Department of Energy have produced an NMR logging tool that is much smaller and less costly than comparable oilfield NMR logging tools. This system is specifically designed for near surface groundwater investigations, incorporates small diameter probes (as small as 1.67 inches diameter) and man-portable surface stations, and provides NMR data and information content on par with oilfield NMR logging tools. A direct-push variant of this logging tool has also been developed. Key challenges associated with small diameter tools include inherently lower SNR and logging speeds, the desire to extend the sensitive zone as far as possible into unconsolidated formations, and simultaneously maintaining high power and signal fidelity. Our ongoing research in groundwater NMR aims to integrate surface and borehole measurements for regional-scale permeability mapping, and to develop in-place NMR sensors for long-term monitoring of contaminant and remediation processes. In addition to groundwater resource characterization, promising new applications of NMR include assessing water content in ice and permafrost, management of groundwater in mining operations, and evaluation and management of groundwater in civil engineering applications.

  15. A new look at the Lake Superior biomass size spectrum

    USGS Publications Warehouse

    Yurista, Peder M.; Yule, Daniel L.; Balge, Matt; VanAlstine, Jon D.; Thompson, Jo A.; Gamble, Allison E.; Hrabik, Thomas R.; Kelly, John R.; Stockwell, Jason D.; Vinson, Mark

    2014-01-01

    We synthesized data from multiple sampling programs and years to describe the Lake Superior pelagic biomass size structure. Data consisted of Coulter counts for phytoplankton, optical plankton counts for zooplankton, and acoustic surveys for pelagic prey fish. The size spectrum was stable across two time periods separated by 5 years. The primary scaling or overall slope of the normalized biomass size spectra for the combined years was −1.113, consistent with a previous estimate for Lake Superior (−1.10). Periodic dome structures within the overall biomass size structure were fit to polynomial regressions based on the observed sub-domes within the classical taxonomic positions (algae, zooplankton, and fish). This interpretation of periodic dome delineation was aligned more closely with predator–prey size relationships that exist within the zooplankton (herbivorous, predacious) and fish (planktivorous, piscivorous) taxonomic positions. Domes were spaced approximately every 3.78 log10 units along the axis, with a decreasing peak magnitude of −4.1 log10 units. The relative position of the algal and herbivorous zooplankton domes predicted well the subsequent biomass domes for larger predatory zooplankton and planktivorous prey fish.

  16. Spatial arrangement and size distribution of normal faults, Buckskin detachment upper plate, Western Arizona

    NASA Astrophysics Data System (ADS)

    Laubach, S. E.; Hundley, T. H.; Hooker, J. N.; Marrett, R. A.

    2018-03-01

    Fault arrays typically include a wide range of fault sizes and those faults may be randomly located, clustered together, or regularly or periodically located in a rock volume. Here, we investigate the size distribution and spatial arrangement of normal faults using rigorous size-scaling methods and normalized correlation count (NCC). Outcrop data from Miocene sedimentary rocks in the immediate upper plate of the regional Buckskin detachment, a low-angle normal fault, have differing patterns of spatial arrangement as a function of displacement (offset). Using lower size-thresholds of 1, 0.1, 0.01, and 0.001 m, displacements range over 5 orders of magnitude and have power-law frequency distributions spanning ∼ four orders of magnitude from less than 0.001 m to more than 100 m, with exponents of -0.6 and -0.9. The largest faults, with >1 m displacement, have a shallower size-distribution slope and regular spacing of about 20 m. In contrast, smaller faults have steep size-distribution slopes and irregular spacing, with NCC plateau patterns indicating imposed clustering. Cluster widths are 15 m for the 0.1-m threshold, 14 m for the 0.01-m threshold, and 1 m for the 0.001-m displacement threshold faults. Results demonstrate that normalized correlation count effectively characterizes the spatial arrangement patterns of these faults. Our example from a high-strain fault pattern above a detachment is compatible with size and spatial organization that was influenced primarily by boundary conditions such as fault shape, mechanical unit thickness and internal stratigraphy on a range of scales, rather than purely by interaction among faults during their propagation.
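
    A hedged sketch of estimating a power-law size-distribution exponent from displacements above a lower threshold, by regressing log cumulative frequency on log displacement; the displacement sample below is synthetic, not the outcrop data.

        import numpy as np

        def powerlaw_exponent(displacements, threshold):
            """Slope of log10(cumulative frequency) vs log10(displacement) above a threshold."""
            d = np.asarray(displacements, dtype=float)
            d = np.sort(d[d >= threshold])[::-1]          # descending order
            cum_freq = np.arange(1, d.size + 1)           # rank = number of faults >= d
            slope, intercept = np.polyfit(np.log10(d), np.log10(cum_freq), 1)
            return slope                                  # expected negative (e.g. -0.6 to -0.9)

        # Synthetic displacement sample (metres) drawn from a heavy-tailed distribution.
        rng = np.random.default_rng(2)
        sample = (rng.pareto(0.8, 500) + 1.0) * 0.001
        print(powerlaw_exponent(sample, threshold=0.001))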

  17. Fatigue shifts and scatters heart rate variability in elite endurance athletes.

    PubMed

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in 'fatigue' or in 'no-fatigue' state in 'real life' conditions. 57 elite Nordic-skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was assessed with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms² and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). 172 trials were identified as in a 'fatigue' and 891 as in a 'no-fatigue' state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (-0.36±0.04), logLFSU (-0.27±0.04), logHFSU (-0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (-9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (-0.28±0.03), logLFST (-0.29±0.03), logHFST (-0.32±0.04). Also, intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied by larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern.

  18. Evaluating the performance of the quick CSF method in detecting contrast sensitivity function changes

    PubMed Central

    Hou, Fang; Lesmes, Luis Andres; Kim, Woojae; Gu, Hairong; Pitt, Mark A.; Myung, Jay I.; Lu, Zhong-Lin

    2016-01-01

    The contrast sensitivity function (CSF) has shown promise as a functional vision endpoint for monitoring the changes in functional vision that accompany eye disease or its treatment. However, detecting CSF changes with precision and efficiency at both the individual and group levels is very challenging. By exploiting the Bayesian foundation of the quick CSF method (Lesmes, Lu, Baek, & Albright, 2010), we developed and evaluated metrics for detecting CSF changes at both the individual and group levels. A 10-letter identification task was used to assess the systematic changes in the CSF measured in three luminance conditions in 112 naïve normal observers. The data from the large sample allowed us to estimate the test–retest reliability of the quick CSF procedure and evaluate its performance in detecting CSF changes at both the individual and group levels. The test–retest reliability reached 0.974 with 50 trials. In 50 trials, the quick CSF method can detect a medium 0.30 log unit area under log CSF change with 94.0% accuracy at the individual observer level. At the group level, a power analysis based on the empirical distribution of CSF changes from the large sample showed that a very small area under log CSF change (0.025 log unit) could be detected by the quick CSF method with 112 observers and 50 trials. These results make it plausible to apply the method to monitor the progression of visual diseases or treatment effects on individual patients and greatly reduce the time, sample size, and costs in clinical trials at the group level. PMID:27120074

  19. Shallow conduit processes of the 1991 Hekla eruption, Iceland

    NASA Astrophysics Data System (ADS)

    Gudnason, J.; Thordarson, T.; Houghton, B. F.

    2013-12-01

    On January 17, 1991 at 17:00 hrs, the 17th eruption of Hekla since 1104 AD began. Lasting for almost two months, it produced 0.02 km3 of icelandite tephra and ~0.15 km3 of icelandite lava. This eruption was the third of four eruptions since 1980 with a recurrence period of approximately 10 years, as opposed to a recurrence interval of c. 55 years for the eruptions in the period 1104 AD to 1947 AD. [1] The last four Hekla eruptions are typified by a 0.5-2 hour-long initial phase of subplinian intensity and discharge ranging from 2900-6700 m3/s [2]. In all 4 events the initial phase was followed by a sustained and relatively low-discharge (<20 m3/s) effusive phase, which in the case of Hekla 1991 lasted until the 11th March 1991 [1]. The initial phase of the 1991 event lasted for ~50 minutes and sustained an eruption plume that rose to 11.5 km in about 10 minutes [1]. The plume was dispersed to the NNE at velocities of 60-70 km/hr producing a well-sorted tephra fall covering >20,000 km2. Here we examine the first phase of the Hekla 1991 eruption with a focus on vesiculation and fragmentation processes in the shallow conduit and ash production. Samples of the tephra fall were collected on snow immediately after the initial phase at multiple sites providing a representative spatial coverage within the 0.1 mm isopach [3]. This set was augmented by samples collected in 2012 to provide tighter coverage of the near-vent region. Grain size of all samples has been measured down to 1 micron. Density measurements have been conducted on 4 near-vent pumice samples (100 clasts each) and the pumice vesicle size distribution has been determined in a selected subset of clasts. The reconstructed whole deposit grain size distribution exhibits a unimodal, log-normal distribution peaking at -3 phi, typical of dry, magmatic fragmentation. Pumice densities range from 520-880 kg/m3 and exhibit a tight unimodal and log-normal distribution indicating a mean vesicularity of 77% to 79% for the magma erupted during the initial phase. Along with preliminary results for bubble number density and vesicle size distribution this implies a single late-stage homogeneous bubble nucleation and very uniform conditions of magma fragmentation during this short-lived initial phase of the Hekla 1991 eruption. 1. Gudmundsson, A., et al., The 1991 eruption of Hekla, Iceland. Bulletin of Volcanology, 1992. 54(3): p. 238-246. 2. Höskuldsson, Á., Óskarsson, N., Pedersen, R., Grönvold, K., Vogfjörd, K. & Ólafsdóttir, R. 2007. The millennium eruption of Hekla in February 2000. Bull Volcanol, 70:169-182. 3. Larsen, G., E.G. Vilmundardóttir, and B. Thorkelsson, Heklugosid 1991: Gjóskufall og gjóskulagid frá fyrsta degi gossins. Náttúrufrædingurinn, 1992. 61(3-4): p. 159-176.

  20. Examining the influence of heterogeneous porosity fields on conservative solute transport

    USGS Publications Warehouse

    Hu, B.X.; Meerschaert, M.M.; Barrash, W.; Hyndman, D.W.; He, C.; Li, X.; Guo, Laodong

    2009-01-01

    It is widely recognized that groundwater flow and solute transport in natural media are largely controlled by heterogeneities. In the last three decades, many studies have examined the effects of heterogeneous hydraulic conductivity fields on flow and transport processes, but there has been much less attention to the influence of heterogeneous porosity fields. In this study, we use porosity and particle size measurements from boreholes at the Boise Hydrogeophysical Research Site (BHRS) to evaluate the importance of characterizing the spatial structure of porosity and grain size data for solute transport modeling. Then we develop synthetic hydraulic conductivity fields based on relatively simple measurements of porosity from borehole logs and grain size distributions from core samples to examine and compare the characteristics of tracer transport through these fields with and without inclusion of porosity heterogeneity. In particular, we develop horizontal 2D realizations based on data from one of the less heterogeneous units at the BHRS to examine effects where spatial variations in hydraulic parameters are not large. The results indicate that the distributions of porosity and the derived hydraulic conductivity in the study unit resemble fractal normal and lognormal fields respectively. We numerically simulate solute transport in stochastic fields and find that spatial variations in porosity have significant effects on the spread of an injected tracer plume including a significant delay in simulated tracer concentration histories.

  1. Radium-226 content of beverages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiefer, J.

    Radium contents of commercially obtained beer, wine, milk and mineral waters were measured. All distributions were log-normal with the following geometrical mean values: beer: 2.1 × 10^-2 Bq L^-1; wine: 3.4 × 10^-2 Bq L^-1; milk: 3 × 10^-3 Bq L^-1; normal mineral water: 4.3 × 10^-2 Bq L^-1; medical mineral water: 9.4 × 10^-2 Bq L^-1.

  2. Investigation into the performance of different models for predicting stutter.

    PubMed

    Bright, Jo-Anne; Curran, James M; Buckleton, John S

    2013-07-01

    In this paper we have examined five possible models for the behaviour of the stutter ratio, SR. These were two log-normal models, two gamma models, and a two-component normal mixture model. A two-component normal mixture model was chosen with different behaviours of variance; at each locus SR was described with two distributions, both with the same mean. The distributions have different variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single-source Identifiler™, NGM SElect™ and PowerPlex® 21 DNA profiles to show the applicability of our findings to different data sets. SR values determined from the single-source profiles were compared to the calculated SR after application of the models. The model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
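
    A minimal sketch, on synthetic stutter ratios rather than the published profiles, of fitting a log-normal model and a two-component normal mixture with a common mean and two variances by maximum likelihood and comparing them by AIC.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        sr = np.concatenate([rng.normal(0.08, 0.01, 450),      # synthetic stutter ratios:
                             rng.normal(0.08, 0.03, 50)])      # a few less well-behaved observations

        # Log-normal fit (2 parameters) via the normal fit of log(SR).
        mu, sigma = np.mean(np.log(sr)), np.std(np.log(sr))
        ll_lognorm = np.sum(stats.norm.logpdf(np.log(sr), mu, sigma) - np.log(sr))
        aic_lognorm = 2 * 2 - 2 * ll_lognorm

        # Two-component normal mixture with a common mean and two variances (4 parameters).
        def neg_loglik(theta):
            m, s1, s2, w = theta
            pdf = w * stats.norm.pdf(sr, m, s1) + (1 - w) * stats.norm.pdf(sr, m, s2)
            return -np.sum(np.log(pdf))

        res = minimize(neg_loglik, x0=[sr.mean(), 0.01, 0.03, 0.9],
                       bounds=[(None, None), (1e-4, None), (1e-4, None), (1e-3, 1 - 1e-3)])
        aic_mixture = 2 * 4 + 2 * res.fun

        print(aic_lognorm, aic_mixture)   # lower AIC indicates the better-supported model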

  3. PERFLUORINATED COMPOUNDS IN ARCHIVED HOUSE-DUST SAMPLES

    EPA Science Inventory

    Archived house-dust samples were analyzed for 13 perfluorinated compounds (PFCs). Results show that PFCs are found in house-dust samples, and the data are log-normally distributed. PFOS/PFOA were present in 94.6% and 96.4% of the samples respectively. Concentrations ranged fro...

  4. Calibration of NMR well logs from carbonate reservoirs with laboratory NMR measurements and μXRCT

    DOE PAGES

    Mason, Harris E.; Smith, Megan M.; Hao, Yue; ...

    2014-12-31

    The use of nuclear magnetic resonance (NMR) well log data has the potential to provide in-situ porosity, pore size distributions, and permeability of target carbonate CO₂ storage reservoirs. However, these methods which have been successfully applied to sandstones have yet to be completely validated for carbonate reservoirs. Here, we have taken an approach to validate NMR measurements of carbonate rock cores with independent measurements of permeability and pore surface area to volume (S/V) distributions using differential pressure measurements and micro X-ray computed tomography (μXRCT) imaging methods, respectively. We observe that using standard methods for determining permeability from NMR data incorrectly predicts these values by orders of magnitude. However, we do observe promise that NMR measurements provide reasonable estimates of pore S/V distributions, and with further independent measurements of the carbonate rock properties that universally applicable relationships between NMR measured properties may be developed for in-situ well logging applications of carbonate reservoirs.

  5. Calibration of NMR well logs from carbonate reservoirs with laboratory NMR measurements and μXRCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, Harris E.; Smith, Megan M.; Hao, Yue

    The use of nuclear magnetic resonance (NMR) well log data has the potential to provide in-situ porosity, pore size distributions, and permeability of target carbonate CO₂ storage reservoirs. However, these methods which have been successfully applied to sandstones have yet to be completely validated for carbonate reservoirs. Here, we have taken an approach to validate NMR measurements of carbonate rock cores with independent measurements of permeability and pore surface area to volume (S/V) distributions using differential pressure measurements and micro X-ray computed tomography (μXRCT) imaging methods, respectively. We observe that using standard methods for determining permeability from NMR data incorrectly predicts these values by orders of magnitude. However, we do observe promise that NMR measurements provide reasonable estimates of pore S/V distributions, and with further independent measurements of the carbonate rock properties that universally applicable relationships between NMR measured properties may be developed for in-situ well logging applications of carbonate reservoirs.

  6. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
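
    A minimal sketch of the same idea with a modern generator: discrete random load histories are drawn from several of the listed non-Gaussian distributions and simple peak statistics are extracted; the distribution parameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 10_000

        # Random load histories drawn from several of the non-Gaussian distributions listed above.
        histories = {
            "poisson":     rng.poisson(lam=5.0, size=n).astype(float),
            "binomial":    rng.binomial(n=20, p=0.3, size=n).astype(float),
            "log-normal":  rng.lognormal(mean=0.0, sigma=0.5, size=n),
            "weibull":     rng.weibull(a=1.5, size=n),
            "exponential": rng.exponential(scale=1.0, size=n),
        }

        def peaks(x):
            """Local maxima of a discrete load history (simple peak statistic)."""
            return x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]

        for name, load in histories.items():
            p = peaks(load)
            print(name, p.size, p.mean())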

  7. SAHARA: A package of PC computer programs for estimating both log-hyperbolic grain-size parameters and standard moments

    NASA Astrophysics Data System (ADS)

    Christiansen, Christian; Hartmann, Daniel

    This paper documents a package of menu-driven POLYPASCAL87 computer programs for handling grouped observations data from both sieving (increment data) and settling tube procedures (cumulative data). The package is designed deliberately for use on IBM-compatible personal computers. Two of the programs solve the numerical problem of determining the estimates of the four (main) parameters of the log-hyperbolic distribution and their derivatives. The package also contains a program for determining the mean, sorting, skewness, and kurtosis according to the standard moments. Moreover, the package contains procedures for smoothing and grouping of settling tube data. A graphic part of the package plots the data in a log-log plot together with the estimated log-hyperbolic curve. All estimated parameters are listed along with the plot. Another graphic option is a plot of the log-hyperbolic shape triangle with the (χ,ζ) position of the sample.

  8. Improved Root Normal Size Distributions for Liquid Atomization

    DTIC Science & Technology

    2015-11-01

    Improved Root Normal Size Distributions for Liquid Atomization. Distribution Statement A: Approved for public release; distribution is unlimited.

  9. A novel gamma-fitting statistical method for anti-drug antibody assays to establish assay cut points for data with non-normal distribution.

    PubMed

    Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena

    2010-01-31

    In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%) with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates with negative biases as large as -2.3%, with absolute bias decreasing with the shape parameter. These results were consistent with the well known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target as proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
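
    A hedged sketch of the gamma-based screening cut point on synthetic drug-naïve responses: a 3-parameter gamma (shape, location, scale) is fitted and its 95th percentile taken as the cut point targeting a 5% false-positive rate, alongside the normal-theory cut point for comparison. The synthetic data are not the authors' validation data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        naive_responses = rng.gamma(shape=3.0, scale=0.05, size=200) + 0.2   # synthetic, skewed

        # 3-parameter gamma fit (shape, location, scale) to the drug-naive distribution.
        shape, loc, scale = stats.gamma.fit(naive_responses)

        # Screening cut point targeting a 5% false-positive rate under the gamma fit.
        cut_point_gamma = stats.gamma.ppf(0.95, shape, loc=loc, scale=scale)

        # Normal-theory cut point for comparison (mean + 1.645 SD), which the paper finds
        # biased when the underlying distribution is gamma-like.
        cut_point_normal = naive_responses.mean() + 1.645 * naive_responses.std(ddof=1)

        print(cut_point_gamma, cut_point_normal)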

  10. Experimental and simulation studies on the behavior of signal harmonics in magnetic particle imaging.

    PubMed

    Murase, Kenya; Konishi, Takashi; Takeuchi, Yuki; Takata, Hiroshige; Saito, Shigeyoshi

    2013-07-01

    Our purpose in this study was to investigate the behavior of signal harmonics in magnetic particle imaging (MPI) by experimental and simulation studies. In the experimental studies, we made an apparatus for MPI in which both a drive magnetic field (DMF) and a selection magnetic field (SMF) were generated with a Maxwell coil pair. The MPI signals from magnetic nanoparticles (MNPs) were detected with a solenoid coil. The odd- and even-numbered harmonics were calculated by Fourier transformation with or without background subtraction. The particle size of the MNPs was measured by transmission electron microscopy (TEM), dynamic light-scattering, and X-ray diffraction methods. In the simulation studies, the magnetization and particle size distribution of MNPs were assumed to obey the Langevin theory of paramagnetism and a log-normal distribution, respectively. The odd- and even-numbered harmonics were calculated by Fourier transformation under various conditions of DMF and SMF and for three different particle sizes. The behavior of the harmonics largely depended on the size of the MNPs. When we used the particle size obtained from the TEM image, the simulation results were most similar to the experimental results. The similarity between the experimental and simulation results for the even-numbered harmonics was better than that for the odd-numbered harmonics. This was considered to be due to the fact that the odd-numbered harmonics were more sensitive to background subtraction than were the even-numbered harmonics. This study will be useful for a better understanding, optimization, and development of MPI and for designing MNPs appropriate for MPI.
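
    A hedged simulation sketch in the spirit of the study: adiabatic Langevin magnetization of MNPs with a log-normal size distribution driven by a sinusoidal field, with harmonics taken from the FFT of the induced signal. All material and field parameters below are assumed values, not those of the experimental apparatus.

        import numpy as np

        kB, T = 1.38e-23, 300.0          # Boltzmann constant (J/K), temperature (K)
        Ms = 4.46e5                      # saturation magnetization (A/m), assumed magnetite-like

        def langevin(x):
            """Langevin function L(x) = coth(x) - 1/x, with the small-argument limit x/3."""
            x = np.asarray(x, dtype=float)
            xs = np.where(np.abs(x) > 1e-6, x, 1e-6)     # avoid division by zero
            return np.where(np.abs(x) > 1e-6, 1.0 / np.tanh(xs) - 1.0 / xs, x / 3.0)

        # Log-normal particle-diameter distribution (hypothetical median and width).
        rng = np.random.default_rng(6)
        d = rng.lognormal(mean=np.log(25e-9), sigma=0.2, size=2000)   # core diameters (m)
        m = Ms * np.pi * d ** 3 / 6.0                                  # particle moments (A m^2)

        # Sinusoidal drive field and the ensemble magnetization (adiabatic Langevin model).
        f0, B0 = 25e3, 10e-3                                           # drive frequency (Hz), amplitude (T)
        t = np.arange(0, 1.0 / f0, 1.0 / (f0 * 512))                   # one drive period, 512 samples
        B = B0 * np.sin(2 * np.pi * f0 * t)
        M = np.mean(langevin(np.outer(B, m) / (kB * T)) * m, axis=1)

        # Harmonic spectrum of the induced signal (proportional to dM/dt).
        spectrum = np.abs(np.fft.rfft(np.gradient(M, t)))
        harmonics = spectrum[1:8]        # amplitudes at f0, 2*f0, 3*f0, ...
        print(harmonics)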

  11. A Computer Program for Practical Semivariogram Modeling and Ordinary Kriging: A Case Study of Porosity Distribution in an Oil Field

    NASA Astrophysics Data System (ADS)

    Mert, Bayram Ali; Dag, Ahmet

    2017-12-01

    In this study, firstly, a practical and educational geostatistical program (JeoStat) was developed, and then an example analysis of porosity parameter distribution, using oilfield data, was presented. With this program, two- or three-dimensional variogram analysis can be performed by using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the users. These theoretical models can be easily and quickly fitted to experimental models using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and also uses cross-validation test techniques for validation of the fitted theoretical model. All the results obtained by the analysis as well as all the graphics such as histogram, variogram and kriging estimation maps can be saved to the hard drive, including digitised graphics and maps. In addition, the numerical values of any point in the map can be monitored using a mouse and text boxes. This program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
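
    A minimal sketch of the workflow such a program automates: a spherical semivariogram model and a single-point ordinary-kriging estimate obtained by solving the kriging system with a Lagrange multiplier. The porosity data and variogram parameters below are hypothetical.

        import numpy as np

        def spherical_variogram(h, nugget, sill, a):
            """Spherical model: nugget + (sill - nugget) * (1.5 h/a - 0.5 (h/a)^3) for h <= a."""
            h = np.asarray(h, dtype=float)
            s = np.where(h <= a, 1.5 * h / a - 0.5 * (h / a) ** 3, 1.0)
            return np.where(h > 0, nugget + (sill - nugget) * s, 0.0)

        def ordinary_kriging_point(coords, values, target, vgram):
            """Ordinary kriging estimate at one target location (Lagrange-multiplier system)."""
            n = len(values)
            dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = vgram(dists)
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = vgram(np.linalg.norm(coords - target, axis=1))
            w = np.linalg.solve(A, b)
            return np.dot(w[:n], values)

        # Hypothetical porosity data at four wells, estimated at one unsampled location.
        xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
        phi = np.array([0.18, 0.22, 0.20, 0.25])
        model = lambda h: spherical_variogram(h, nugget=0.0005, sill=0.003, a=150.0)
        estimate = ordinary_kriging_point(xy, phi, target=np.array([40.0, 60.0]), vgram=model)
        print(estimate)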

  12. Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging

    NASA Astrophysics Data System (ADS)

    Wang, Zong; Shi, Wenjiao

    2017-03-01

    Soil particle-size fractions (psf) are basic physical variables that frequently need to be predicted accurately for regional hydrological, ecological, geological, agricultural and environmental studies. Some methods have been proposed to interpolate the spatial distributions of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK) were selected to be compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the right ratio of the predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK had better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and the RR (58.65%) was the highest among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK presented reasonable and smooth transitions on the maps of soil psf according to the environmental factors. The study gives insights for mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research on methods combined with ancillary variables is needed to improve the interpolation performance.
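
    A minimal sketch of the log-ratio transforms that precede kriging in the compared methods, applied to one hypothetical sand/silt/clay composition; the ilr is shown for one common choice of orthonormal (pivot-coordinate) basis in the 3-part case.

        import numpy as np

        def alr(x):
            """Additive log-ratio: log of each part relative to the last part."""
            x = np.asarray(x, dtype=float)
            return np.log(x[:-1] / x[-1])

        def clr(x):
            """Centered log-ratio: log of each part relative to the geometric mean."""
            x = np.asarray(x, dtype=float)
            g = np.exp(np.mean(np.log(x)))
            return np.log(x / g)

        def ilr_3part(x):
            """Isometric log-ratio for a 3-part composition (one pivot-coordinate basis)."""
            x = np.asarray(x, dtype=float)
            z1 = np.sqrt(2.0 / 3.0) * np.log(x[0] / np.sqrt(x[1] * x[2]))
            z2 = np.sqrt(0.5) * np.log(x[1] / x[2])
            return np.array([z1, z2])

        # Hypothetical sand/silt/clay fractions at one sample point (must sum to 1).
        psf = np.array([0.45, 0.35, 0.20])
        print(alr(psf), clr(psf), ilr_3part(psf))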

  13. Accurate computation of survival statistics in genome-wide studies.

    PubMed

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J; Upfal, Eli

    2015-05-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for any size populations. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations.
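
    For reference, a self-contained sketch of the standard asymptotic two-group log-rank statistic (the approximation whose behavior for unbalanced group sizes motivates ExaLT); the survival times and event indicators below are invented.

        import numpy as np
        from scipy.stats import chi2

        def logrank_test(time1, event1, time2, event2):
            """Asymptotic two-group log-rank test (chi-square with 1 df)."""
            times = np.concatenate([time1, time2]).astype(float)
            events = np.concatenate([event1, event2]).astype(bool)
            groups = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
            O1 = E1 = V = 0.0
            for t in np.unique(times[events]):                 # distinct observed event times
                at_risk = times >= t
                n, n1 = at_risk.sum(), (at_risk & (groups == 0)).sum()
                d = ((times == t) & events).sum()
                d1 = ((times == t) & events & (groups == 0)).sum()
                O1 += d1
                E1 += d * n1 / n
                if n > 1:
                    V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
            stat = (O1 - E1) ** 2 / V
            return stat, chi2.sf(stat, df=1)

        # Invented survival times (months) and event indicators (1 = death, 0 = censored).
        t_mut = np.array([5, 8, 12, 20, 30]); e_mut = np.array([1, 1, 1, 0, 1])
        t_wt = np.array([10, 18, 24, 36, 40, 52]); e_wt = np.array([1, 0, 1, 1, 0, 1])
        print(logrank_test(t_mut, e_mut, t_wt, e_wt))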

  14. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for any size populations. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620

  15. Property Improvement in CZT via Modeling and Processing Innovations . Te-particles in vertical gradient freeze CZT: Size and Spatial Distributions and Constitutional Supercooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henager, Charles H.; Alvine, Kyle J.; Bliss, Mary

    2014-10-01

    A section of a vertical gradient freeze CZT boule approximately 2100 mm3 in volume with a planar area of 300 mm2 was prepared and examined using transmitted IR microscopy at various magnifications to determine the three-dimensional spatial and size distributions of Te-particles over large longitudinal and radial length scales. The boule section was approximately 50-mm wide by 60-mm in length by 7-mm thick and was doubly polished for TIR work. Te-particles were imaged through the thickness using extended focal imaging to locate the particles in thickness planes spaced 15-µm apart and then in plane of the image using xy-coordinates of the particle center of mass so that a true three dimensional particle map was assembled for a 1-mm by 45-mm longitudinal strip and for a 1-mm by 50-mm radial strip. Te-particle density distributions were determined as a function of longitudinal and radial positions in these strips, and treating the particles as vertices of a network created a 3D image of the particle spatial distribution. Te-particles exhibited a multi-modal log-normal size density distribution that indicated a slight preference for increasing size with longitudinal growth time, while showing a pronounced cellular network structure throughout the boule that can be correlated to dislocation network sizes in CZT. Higher magnification images revealed a typical Rayleigh-instability pearl string morphology with large and small satellite droplets. This study includes solidification experiments in small crucibles of 30:70 mixtures of Cd:Te to reduce the melting point below 1273 K (1000°C). These solidification experiments were performed over a wide range of cooling rates and clearly demonstrated a growth instability with Te-particle capture that is suggested to be responsible for one of the peaks in the size distribution using size discrimination visualization. The results are discussed with regard to a manifold Te-particle genesis history as 1) Te-particle direct capture from melt-solid growth instabilities, 2) Te-particle formation from dislocation core diffusion and the formation and breakup of Te-tubes, and 3) Te-particle formation due to classical nucleation and growth as precipitates.

  16. Particle size dependence of heating power in MgFe2O4 nanoparticles for hyperthermia therapy application

    NASA Astrophysics Data System (ADS)

    Reza Barati, Mohammad; Selomulya, Cordelia; Suzuki, Kiyonori

    2014-05-01

    Magnetic nanoparticles with narrow size distributions have successfully been synthesized by an ultrasonic assisted co-precipitation method. The effects of particle size on magnetic properties, heat generation by AC fields, and the cell cytotoxicity were investigated for MgFe2O4 nanoparticles with mean diameters varying from 7 ± 0.5 nm to 29 ± 1 nm. The critical size for superparamagnetic to ferrimagnetic transition (DS→F) of MgFe2O4 was determined to be about 13 ± 0.5 nm at 300 K. The specific absorption rate (SAR) of MgFe2O4 nanoparticles was strongly size dependent; it showed a maximum value of 19 W/g when the particle size was 10 ± 0.5 nm at which the Néel and Brownian relaxations are the major cause of heating. The SAR value was suppressed dramatically by 46% with increasing particle size from 10 ± 0.5 nm to 13 ± 0.5 nm, where Néel relaxation slows down and SAR results primarily from Brownian relaxation loss. A further reduction in SAR value was evident when the size was increased from 13 ± 0.5 nm to 16 ± 1 nm, where the superparamagnetic to ferromagnetic transition occurs. However, SAR showed a tendency to increase with particle size again above 16 ± 1 nm where hysteresis loss becomes the dominant mechanism of heat generation. The particle size dependence of SAR in the superparamagnetic region was well described by considering the effective relaxation time estimated based on a log-normal size distribution. The clear size dependence of SAR is attributable to the high degree of monodispersity of particles synthesized here. The high SAR value of water-based MgFe2O4 magnetic suspension combined with low cell cytotoxicity suggests a great potential of MgFe2O4 nanoparticles for magnetic hyperthermia therapy applications.
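
    The distribution-averaged relaxation time mentioned above can be illustrated with a short Python sketch that combines Néel and Brownian relaxation and averages over a log-normal diameter distribution; the anisotropy constant, carrier viscosity, hydrodynamic shell thickness and distribution parameters below are assumed order-of-magnitude values, not those measured for the MgFe2O4 particles.

      import numpy as np

      kB, T = 1.380649e-23, 300.0     # J/K, K
      K = 2.0e4                       # J/m^3, assumed effective anisotropy constant
      eta = 1.0e-3                    # Pa*s, viscosity of water
      tau0 = 1.0e-9                   # s, attempt time
      shell = 2.0e-9                  # m, assumed nonmagnetic/hydrodynamic shell

      def tau_eff(d):
          """Effective relaxation time for magnetic core diameter d (m)."""
          V = np.pi * d ** 3 / 6.0                     # magnetic core volume
          Vh = np.pi * (d + 2 * shell) ** 3 / 6.0      # hydrodynamic volume
          tau_neel = tau0 * np.exp(K * V / (kB * T))
          tau_brown = 3.0 * eta * Vh / (kB * T)
          return 1.0 / (1.0 / tau_neel + 1.0 / tau_brown)

      # number-weighted log-normal distribution of core diameters (median 10 nm, sigma 0.2)
      d = np.linspace(3e-9, 30e-9, 2000)
      mu, sigma = np.log(10e-9), 0.2
      pdf = np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))
      tau_avg = np.sum(tau_eff(d) * pdf) / np.sum(pdf)    # average on a uniform grid
      print(f"distribution-averaged effective relaxation time: {tau_avg:.3e} s")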

  17. Size distribution and sorption of polychlorinated biphenyls during haze episodes

    NASA Astrophysics Data System (ADS)

    Zhu, Qingqing; Liu, Guorui; Zheng, Minghui; Zhang, Xian; Gao, Lirong; Su, Guijin; Liang, Yong

    2018-01-01

    There is a lack of studies on the size distribution of polychlorinated biphenyls (PCBs) during haze days, and their sorption mechanisms on aerosol particles remain unclear. In this study, PCBs in particle-sized aerosols from urban atmospheres of Beijing, China were investigated during haze and normal days. The concentrations, gas/particle partitioning, size distribution, and associated human daily intake of PCBs via inhalation were compared during haze days and normal days. Compared with normal days, higher particle mass-associated PCB levels were measured during haze days. The concentrations of ∑PCBs in particulate fractions were 11.9-134 pg/m3 and 6.37-14.9 pg/m3 during haze days and normal days, respectively. PCBs increased with decreasing particle size (>10 μm, 10-2.5 μm, 2.5-1.0 μm, and ≤1.0 μm). During haze days, PCBs were overwhelmingly associated with a fine particle fraction of ≤1.0 μm (64.6%), while during normal days the contribution was 33.7%. Tetra-CBs were the largest contributors (51.8%-66.7%) both in the gas and particle fractions during normal days. The profiles in the gas fraction were conspicuously different than those in the PM fractions during haze days, with di-CBs predominating in the gas fraction and higher homologues (tetra-CBs, penta-CBs, and hexa-CBs) concurrently accounting for most of the PM fractions. The mean-normalized size distributions of particulate mass and PCBs exhibited unimodal patterns, and a similar trend was observed for PCBs during both days. They all tended to be in the PM fraction of 1.0-2.5 μm. Adsorption might be the predominating mechanism for the gas-particle partitioning of PCBs during haze days, whereas absorption might be dominative during normal days.

  18. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parravano, Antonio; Sanchez, Nestor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected root-N statistical fluctuations, increasing with H.

  19. The Universal Statistical Distributions of the Affinity, Equilibrium Constants, Kinetics and Specificity in Biomolecular Recognition

    PubMed Central

    Zheng, Xiliang; Wang, Jin

    2015-01-01

    We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453

  20. The Dependence of Prestellar Core Mass Distributions on the Structure of the Parental Cloud

    NASA Astrophysics Data System (ADS)

    Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle & Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle & Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected root-N statistical fluctuations, increasing with H.

  1. New View on Quiet-Sun Photospheric Dynamics Offered by NST Data

    NASA Astrophysics Data System (ADS)

    Abramenko, Valentyna; Yurchyshyn, V.; Goode, P. R.

    2011-05-01

    Recent observations of the quiet-Sun photosphere obtained with the 1.6-meter New Solar Telescope (NST) of Big Bear Solar Observatory (BBSO) delivered new information about photospheric fine structures and their dynamics, while also posing new questions. The 2-hour uninterrupted data set of solar granulation obtained under excellent seeing conditions on August 3, 2010 (with a cadence of 10 s) was the basis for the study. Statistical analysis of automatically detected and tracked magnetic bright points (MBPs) showed that the MBP population monotonically increases as their size decreases, down to 60-70 km. Our analysis shows that if smaller magnetic flux tubes exist, their size is below 60-70 km, which imposes strong restrictions on the modeling of these structures. We also found that the distributions of MBP size and lifetime do not follow the traditional Gaussian distribution typical of random processes. Instead, they follow a log-normal distribution, typical of avalanches, catastrophes, stock market data, etc. Our data set also demonstrated that a majority (98.6%) of MBPs are short-lived (<2 min). This remarkable fact was not obvious from previous studies because an extremely high time cadence was required. It indicates that the majority of MBPs appear for a very short time (tens of seconds), similar to other transient features such as chromospheric jets. The most important point is that these small, short-lived MBPs significantly increase the dynamics (flux emergence, collapse into MBPs, and magnetic flux recycling) of the solar surface magnetic fields.
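
    Testing whether such size or lifetime data are better described by a log-normal than by a Gaussian can be done with a few lines of SciPy, as sketched below; the generated lifetimes are purely illustrative stand-ins for the tracked MBP measurements.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      lifetimes = rng.lognormal(mean=np.log(40.0), sigma=0.8, size=500)   # hypothetical MBP lifetimes, s

      # fit a two-parameter log-normal (location fixed at zero) and a Gaussian, then compare
      shape, loc, scale = stats.lognorm.fit(lifetimes, floc=0)
      mu, sd = stats.norm.fit(lifetimes)
      ks_logn = stats.kstest(lifetimes, 'lognorm', args=(shape, loc, scale))
      ks_norm = stats.kstest(lifetimes, 'norm', args=(mu, sd))
      print("log-normal KS p =", ks_logn.pvalue, " normal KS p =", ks_norm.pvalue)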

  2. A cross-site comparison of methods used for hydrogeologic characterization of the Galena-Platteville aquifer in Illinois and Wisconsin, with examples from selected Superfund sites

    USGS Publications Warehouse

    Kay, Robert T.; Mills, Patrick C.; Dunning, Charles P.; Yeskis, Douglas J.; Ursic, James R.; Vendl, Mark

    2004-01-01

    The effectiveness of 28 methods used to characterize the fractured Galena-Platteville aquifer at eight sites in northern Illinois and Wisconsin is evaluated. Analysis of government databases, previous investigations, topographic maps, aerial photographs, and outcrops was essential to understanding the hydrogeology in the area to be investigated. The effectiveness of surface-geophysical methods depended on site geology. Lithologic logging provided essential information for site characterization. Cores were used for stratigraphy and geotechnical analysis. Natural-gamma logging helped identify the effect of lithology on the location of secondary-permeability features. Caliper logging identified large secondary-permeability features. Neutron logs identified trends in matrix porosity. Acoustic-televiewer logs identified numerous secondary-permeability features and their orientation. Borehole-camera logs also identified a number of secondary-permeability features. Borehole ground-penetrating radar identified lithologic and secondary-permeability features. However, the accuracy and completeness of this method is uncertain. Single-point-resistance, density, and normal resistivity logs were of limited use. Water-level and water-quality data identified flow directions and indicated the horizontal and vertical distribution of aquifer permeability and the depth of the permeable features. Temperature, spontaneous potential, and fluid-resistivity logging identified few secondary-permeability features at some sites and several features at others. Flowmeter logging was the most effective geophysical method for characterizing secondary-permeability features. Aquifer tests provided insight into the permeability distribution, identified hydraulically interconnected features and the presence of heterogeneity and anisotropy, and determined effective porosity. Aquifer heterogeneity prevented calculation of accurate hydraulic properties from some tests. Different methods, such as flowmeter logging and slug testing, occasionally produced different interpretations. Aquifer characterization improved with an increase in the number of data points, the period of data collection, and the number of methods used.

  3. Universal noise and Efimov physics

    NASA Astrophysics Data System (ADS)

    Nicholson, Amy N.

    2016-03-01

    Probability distributions for correlation functions of particles interacting via random-valued fields are discussed as a novel tool for determining the spectrum of a theory. In particular, this method is used to determine the energies of universal N-body clusters tied to Efimov trimers, for even N, by investigating the distribution of a correlation function of two particles at unitarity. Using numerical evidence that this distribution is log-normal, an analytical prediction for the N-dependence of the N-body binding energies is made.

  4. Transmitting Information by Propagation in an Ocean Waveguide: Computation of Acoustic Field Capacity

    DTIC Science & Technology

    2015-06-17

    progress, Eq. (4) is evaluated in terms of the differential entropy h. The integrals can be identified as differential entropy terms by expanding the log ... all random vectors p with a given covariance matrix, the entropy of p is maximized when p is ZMCSCG, since a normal distribution maximizes the ... entropy over all distributions with the same covariance [9, 18], implying that this is the optimal distribution on s as well. In addition, of all the

  5. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.

  6. The Effect of Dioptric Blur on Reading Performance

    PubMed Central

    Chung, Susana T.L.; Jarvis, Samuel H.; Cheung, Sing-Hang

    2013-01-01

    Little is known about the systematic impact of blur on reading performance. The purpose of this study was to quantify the effect of dioptric blur on reading performance in a group of normally sighted young adults. We measured monocular reading performance and visual acuity for 19 observers with normal vision, for five levels of optical blur (no blur, 0.5, 1, 2 and 3D). Dioptric blur was induced using convex trial lenses placed in front of the testing eye, with the pupil dilated and in the presence of a 3 mm artificial pupil. Reading performance was assessed using eight versions of the MNREAD Acuity Chart. For each level of dioptric blur, observers read aloud sentences on one of these charts, from large to small print. Reading time for each sentence and the number of errors made were recorded and converted to reading speed in words per minute. Visual acuity was measured using 4-orientation Landolt C stimuli. For all levels of dioptric blur, reading speed increased with print size up to a certain print size and then remained constant at the maximum reading speed. By fitting nonlinear mixed-effects models, we found that the maximum reading speed was minimally affected by blur up to 2D, but was ~23% slower for 3D of blur. When the amount of blur increased from 0 (no-blur) to 3D, the threshold print size (the print size corresponding to 80% of the maximum reading speed) increased from 0.01 to 0.88 logMAR, reading acuity worsened from −0.16 to 0.58 logMAR, and visual acuity worsened from −0.19 to 0.64 logMAR. The similar rates of change with blur for threshold print size, reading acuity and visual acuity imply that visual acuity is a good predictor of threshold print size and reading acuity. Like visual acuity, reading performance is susceptible to the degrading effect of optical blur. For increasing amounts of blur, larger print sizes are required to attain the maximum reading speed. PMID:17442363

  7. Model selection for identifying power-law scaling.

    PubMed

    Ton, Robert; Daffertshofer, Andreas

    2016-08-01

    Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present. Copyright © 2016 Elsevier Inc. All rights reserved.
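
    A minimal NumPy sketch of conventional DFA, the starting point of the algorithm described above, is given below; it estimates the scaling exponent from the slope of log F(n) versus log n, whereas the paper's method additionally keeps the per-interval fluctuation values and performs (Bayesian) model comparison, which this sketch does not attempt.

      import numpy as np

      def dfa(x, scales):
          """Root-mean-square fluctuation F(n) for each window size n, linear detrending."""
          y = np.cumsum(np.asarray(x, float) - np.mean(x))      # integrated profile
          F = []
          for n in scales:
              n_win = len(y) // n
              ms = []
              for i in range(n_win):
                  seg = y[i * n:(i + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
                  ms.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(ms)))
          return np.array(F)

      rng = np.random.default_rng(1)
      signal = rng.standard_normal(10000)                       # white noise: exponent ~0.5 expected
      scales = np.unique(np.logspace(1.2, 3.0, 15).astype(int))
      alpha = np.polyfit(np.log(scales), np.log(dfa(signal, scales)), 1)[0]
      print("estimated scaling exponent:", alpha)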

  8. Robustness of disaggregate oil and gas discovery forecasting models

    USGS Publications Warehouse

    Attanasi, E.D.; Schuenemeyer, J.H.

    1989-01-01

    The trend in forecasting oil and gas discoveries has been to develop and use models that allow forecasts of the size distribution of future discoveries. From such forecasts, exploration and development costs can more readily be computed. Two classes of these forecasting models are the Arps-Roberts type models and the 'creaming method' models. This paper examines the robustness of the forecasts made by these models when the historical data on which the models are based have been subject to economic upheavals or when historical discovery data are aggregated from areas having widely differing economic structures. Model performance is examined in the context of forecasting discoveries for offshore Texas State and Federal areas. The analysis shows how the model forecasts are limited by information contained in the historical discovery data. Because the Arps-Roberts type models require more regularity in discovery sequence than the creaming models, prior information had to be introduced into the Arps-Roberts models to accommodate the influence of economic changes. The creaming methods captured the overall decline in discovery size but did not easily allow introduction of exogenous information to compensate for incomplete historical data. Moreover, the predictive log normal distribution associated with the creaming model methods appears to understate the importance of the potential contribution of small fields. ?? 1989.

  9. Assessment of Methane Emissions from Oil and Gas Production Pads using Mobile Measurements

    EPA Science Inventory

    Journal Article Abstract --- "A mobile source inspection approach called OTM 33A was used to quantify short-term methane emission rates from 218 oil and gas production pads in Texas, Colorado, and Wyoming from 2010 to 2013. The emission rates were log-normally distributed with ...

  10. Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam

    NASA Astrophysics Data System (ADS)

    N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.

    In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, as it may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. One year of hourly average data for each of 2006 and 2007 was used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to assess the goodness of fit of the distributions. The distribution that best fits the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceeding given concentrations were calculated, and the return period for the coming year was predicted from the cumulative distribution function (cdf) of the best-fit distribution. For the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area is not predicted to exceed the MAAQG of 150 μg/m3.
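
    The exceedance-probability and return-period calculation described above can be sketched with SciPy as follows; the synthetic daily PM10 series and its parameters are illustrative assumptions, and only the log-normal candidate is fitted here.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      pm10 = rng.lognormal(mean=np.log(55.0), sigma=0.55, size=365)   # hypothetical daily means, ug/m3

      shape, loc, scale = stats.lognorm.fit(pm10, floc=0)             # MLE fit of a log-normal
      p_exceed = stats.lognorm.sf(150.0, shape, loc, scale)           # P(PM10 > 150 ug/m3)
      print("exceedance probability per day:", p_exceed)
      print("expected exceedance days per year:", 365.0 * p_exceed)
      print("return period (days per exceedance):", 1.0 / p_exceed)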

  11. Statistical approaches for the determination of cut points in anti-drug antibody bioassays.

    PubMed

    Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A

    2015-03-01

    Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
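
    As a simplified illustration of point (v) above, the sketch below computes a 95th-percentile screening cut point under a normal and under a log-normal assumption for the same synthetic drug-naive data; it deliberately ignores the plate, run and analyst random effects (and the mixture-model handling of pre-existing positives) that the approaches discussed in the paper account for.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      signal = rng.lognormal(mean=0.0, sigma=0.25, size=120)   # hypothetical normalized assay signals

      z95 = stats.norm.ppf(0.95)
      cp_normal = signal.mean() + z95 * signal.std(ddof=1)            # normal-theory cut point
      logs = np.log(signal)
      cp_lognormal = np.exp(logs.mean() + z95 * logs.std(ddof=1))     # log-normal cut point
      print("normal assumption:", cp_normal, " log-normal assumption:", cp_lognormal)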

  12. A general approach to double-moment normalization of drop size distributions

    NASA Astrophysics Data System (ADS)

    Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.

    2003-04-01

    Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. This provides a unified view of DSD normalization and a good model representation of DSDs. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.

  13. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)

  14. Reconstruction of doses and deposition in the western trace from the Chernobyl accident.

    PubMed

    Sikkeland, T; Skuterud, L; Goltsova, N I; Lindmo, T

    1997-05-01

    A model is presented for the explosive cloud of particulates that produced the western trace of high radioactive ground contamination in the Chernobyl accident on 26 April 1986. The model was developed to reproduce measured dose rates and nuclide contamination and to relate estimated doses to observed changes in: (1) infrared emission from the foliage and (2) morphological and histological structures of individual pines. Dominant factors involved in ground contamination were initial cloud shape, particle size distribution, and rate of particle fallout. At time of formation, the cloud was assumed to be parabolical and to contain a homogeneous distribution of spherically shaped fuel particulates having a log-normal size distribution. The particulates were dispersed by steady winds and diffusion that produced a straight line deposition path. The analysis indicates that two clouds, denoted by Cloud I and Cloud II, were involved. Fallout from the former dominated the far field region and fallout from latter the region near the reactor. At formation they had a full width at half maximum of 1800 m and 500 m, respectively. For wind velocities of 5-10 m s(-1) the particulates' radial distribution at formation had a standard deviation and mode of 1.8 microm and 0.5 microm, respectively. This distribution corresponds to a release of 390 GJ in the runaway explosion. The clouds' height and mass are not uniquely determined but are coupled together. For an initial height of 3,600 m, Cloud I contained about 400 kg fuel. For Cloud II the values were, respectively, 1,500 m and 850 kg. Loss of activities from the clouds is found to be small. Values are obtained for the rate of radionuclide migration from the deposit. Various types of biological damage to pines, as reported in the literature, are shown to be mainly due to ionizing radiation from the deposit by Cloud II. A formula is presented for the particulate size distribution in the trace area.

  15. Spin Polarization and Quantum Spins in Au Nanoparticles

    PubMed Central

    Li, Chi-Yen; Karna, Sunil K.; Wang, Chin-Wei; Li, Wen-Hsien

    2013-01-01

    The present study focuses on investigating the magnetic properties and the critical particle size for developing sizable spontaneous magnetic moment of bare Au nanoparticles. Seven sets of bare Au nanoparticle assemblies, with diameters from 3.5 to 17.5 nm, were fabricated with the gas condensation method. Line profiles of the X-ray diffraction peaks were used to determine the mean particle diameters and size distributions of the nanoparticle assemblies. The magnetization curves M(Ha) reveal Langevin field profiles. Magnetic hysteresis was clearly revealed in the low field regime even at 300 K. Contributions to the magnetization from different size particles in the nanoparticle assemblies were considered when analyzing the M(Ha) curves. The results show that the maximum particle moment will appear in 2.4 nm Au particles. A similar result of the maximum saturation magnetization appearing in 2.3 nm Au particles is also concluded through analysis of the dependency of the saturation magnetization MP on particle size. The MP(d) curve departs significantly from the 1/d dependence, but can be described by a log-normal function. Magnetization can be barely detected for Au particles larger than 27 nm. Magnetic field induced Zeeman magnetization from the quantum confined Kubo gap opening appears in Au nanoparticles smaller than 9.5 nm in diameter. PMID:23989607

  16. A Versatile Methodology Using Sol-Gel, Supercritical Extraction, and Etching to Fabricate a Nitramine Explosive: Nanometer HNIW

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Song, Xiaolan; Song, Dan; Jiang, Wei; Liu, Hongying; Li, Fengsheng

    2013-01-01

    A combinative method with three steps was developed to fabricate HNIW (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane) nanoexplosives with the gas anti-solvent (GAS) method, improved by introducing a gel frame to limit the overgrowth of recrystallized particles and an acid assistant to remove the used frame. Forming the mixed gel, by locking the explosive solution into a wet gel whose volume was divided by the networks, was the key to the fabrication. As demonstrated by scanning electron microscopy (SEM) analysis, a log-normal size distribution of nano-HNIW indicated that about 74.4% of the particles had sizes <120 nm and the maximum particle size was ∼300 nm. Energy-dispersive X-ray spectroscopy (EDS) and infrared (IR) characterizations showed that the aerogel embedded with nanoexplosive particles was dissolved in hydrochloric acid solution, and the raw ɛ-HNIW was mostly transformed into the α phase (nano-HNIW) during recrystallization. Nano-HNIW exhibited impact and friction sensitivities almost equal to those of raw HNIW, within experimental error. Thermal analysis showed that the decomposition peak temperature decreased by more than 10°C and that the heat release increased by 42.5% when the particle size of HNIW was at the nanometer scale.

  17. Discrete hierarchy of sizes and performances in the exchange-traded fund universe

    NASA Astrophysics Data System (ADS)

    Vandermarliere, B.; Ryckebusch, J.; Schoors, K.; Cauwels, P.; Sornette, D.

    2017-03-01

    Using detailed statistical analyses of the size distribution of a universe of equity exchange-traded funds (ETFs), we discover a discrete hierarchy of sizes, which imprints a log-periodic structure on the probability distribution of ETF sizes that dominates the details of the asymptotic tail. This allows us to propose a classification of the studied universe of ETFs into seven size layers approximately organized according to a multiplicative ratio of 3.5 in their total market capitalization. Introducing a similarity metric generalizing the Herfindahl index, we find that the largest ETFs exhibit a significantly stronger intra-layer and inter-layer similarity compared with the smaller ETFs. Comparing the performance across the seven discerned ETF size layers, we find an inverse size effect, namely that large ETFs perform significantly better than the small ones both in 2014 and 2015.

  18. Distributions of polycyclic aromatic hydrocarbons in surface waters, sediments and soils of Hangzhou City, China.

    PubMed

    Chen, Baoliang; Xuan, Xiaodong; Zhu, Lizhong; Wang, Jing; Gao, Yanzheng; Yang, Kun; Shen, Xueyou; Lou, Baofeng

    2004-09-01

    Ten polycyclic aromatic hydrocarbons (PAHs) were measured simultaneously in 17 surface water samples and 11 sediments from four water bodies, and in 3 soils near the water-body banks, in Hangzhou, China in December 2002. The sum of PAH concentrations ranged from 0.989 to 9.663 microg/L in surface waters, from 132.7 to 7343 ng/g dry weight in sediments, and from 59.71 to 615.8 ng/g dry weight in soils. The composition pattern of PAHs by ring size in water, sediment and soil was surveyed. Three-ring PAHs dominated in surface waters and soils, whereas sediments were mostly dominated by four-ring PAHs. Furthermore, apparent distribution coefficients of PAHs (K(d)) and solid f(oc)-normalized K(d) (i.e. K(oc) = K(d)/f(oc)) were calculated. The relationship between logK(oc) and logK(ow) of PAHs for field data on sediments was compared with predicted values. The sources of PAHs in the different water bodies were evaluated by comparing K(oc) values in sediments of the downstream river reaches with those in soils. The Hangzhou section of the Grand Canal was heavily polluted by PAHs released from industrial wastewater in the past, and PAHs in its sediment may now serve as a source of PAHs in the surface water. PAHs in the Qiantang River derived mainly from soil runoff, while municipal road runoff contributed most of the PAHs in West Lake.

  19. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    Bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by bootstrap procedure are presented for various test types (inequality, non-inferiority, superiority, and equivalence), respectively. Meanwhile, sample size calculation by mathematical formulas (normal distribution assumption) for the identical data are also carried out. Consequently, power difference between the two calculation methods is acceptably small for all the test types. It shows that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric statistical method of Wilcoxon test was applied to compare the two groups in the data during the process of bootstrap power estimation. As a result, the power estimated by normal distribution-based formula is far larger than that by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the beginning, and that the same statistical method as used in the subsequent statistical analysis is employed for each bootstrap sample during the course of bootstrap sample size estimation, provided there is historical true data available that can be well representative of the population to which the proposed trial is planning to extrapolate.
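
    A minimal sketch of the bootstrap power/sample-size idea described above, using the Wilcoxon rank-sum (Mann-Whitney) test as the planned analysis method, is given below; the pilot data, effect size and number of bootstrap replicates are illustrative assumptions.

      import numpy as np
      from scipy import stats

      def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000, alpha=0.05, seed=0):
          """Estimate power for a candidate per-arm sample size by resampling pilot data."""
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n_boot):
              a = rng.choice(pilot_a, size=n_per_arm, replace=True)
              b = rng.choice(pilot_b, size=n_per_arm, replace=True)
              p = stats.mannwhitneyu(a, b, alternative='two-sided').pvalue
              hits += p < alpha
          return hits / n_boot

      rng = np.random.default_rng(4)
      pilot_a = rng.lognormal(0.0, 1.0, 60)    # hypothetical skewed pilot data, arm A
      pilot_b = rng.lognormal(0.4, 1.0, 60)    # arm B with a modest shift
      for n in (30, 50, 80, 120):
          print(n, bootstrap_power(pilot_a, pilot_b, n))

    The smallest n whose estimated power reaches the target (e.g. 80%) would be taken as the required sample size per arm.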

  20. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.

    PubMed

    Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L

    2011-10-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
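
    The two fitting strategies compared above can be sketched as follows with NumPy/SciPy: linear regression on log-transformed data versus nonlinear least squares on the original scale; the synthetic data use multiplicative log-normal error, the case in which the log-transformed fit is expected to perform better.

      import numpy as np
      from scipy import optimize, stats

      rng = np.random.default_rng(5)
      x = np.linspace(1.0, 100.0, 200)
      y = 2.0 * x ** 0.75 * rng.lognormal(0.0, 0.3, size=x.size)   # y = a*x^b with multiplicative error

      # (1) linear regression on log-log axes
      fit = stats.linregress(np.log(x), np.log(y))
      a_lr, b_lr = np.exp(fit.intercept), fit.slope

      # (2) nonlinear least squares on the original scale
      (a_nlr, b_nlr), _ = optimize.curve_fit(lambda x, a, b: a * x ** b, x, y, p0=(1.0, 1.0))

      print("LR :", a_lr, b_lr)
      print("NLR:", a_nlr, b_nlr)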

  1. Determining the Diversity and Species Abundance Patterns in Arctic Soils using Rational Methods for Exploring Microbial Diversity

    NASA Astrophysics Data System (ADS)

    Ovreas, L.; Quince, C.; Sloan, W.; Lanzen, A.; Davenport, R.; Green, J.; Coulson, S.; Curtis, T.

    2012-12-01

    Arctic microbial soil communities are intrinsically interesting and poorly characterised. We have inferred the diversity and species abundance distribution of 6 Arctic soils: new and mature soil at the foot of a receding glacier, Arctic Semi Desert, the foot of bird cliffs and soil underlying Arctic Tundra Heath, all near Ny-Ålesund, Spitsbergen. Diversity, distribution and sample sizes were estimated using the rational method of Quince et al. (ISME Journal 2, 2008: 997-1006) to determine the most plausible underlying species abundance distribution. A log-normal species abundance curve was found to give a slightly better fit than an inverse Gaussian curve if, and only if, sequencing error was removed. The median estimates of diversity of operational taxonomic units (at the 3% level) were 3600-5600 (lognormal assumed) and 2825-4100 (inverse Gaussian assumed). The nature and origins of species abundance distributions are poorly understood but may yet be grasped by observing and analysing such distributions in the microbial world. The sample size required to observe the distribution (by sequencing 90% of the taxa) varied between ~10^6 and ~10^5 for the lognormal and inverse Gaussian respectively. We infer that between 5 and 50 GB of sequencing would be required to capture 90% of the metagenome. Though a principal components analysis clearly divided the sites into three groups, there was a high (20-45%) degree of overlap between locations irrespective of geographical proximity. Interestingly, the nearest relatives of the most abundant taxa at most sites were of alpine or polar origin. (Figure: samples plotted on the first two principal components, together with arbitrary discriminatory OTUs.)

  2. Inequality and City Size*

    PubMed Central

    Baum-Snow, Nathaniel; Pavan, Ronni

    2013-01-01

    Between 1979 and 2007 a strong positive monotonic relationship between wage inequality and city size has developed. This paper investigates the links between this emergent city size inequality premium and the contemporaneous nationwide increase in wage inequality. After controlling for the skill composition of the workforce across cities of different sizes, we show that at least 23 percent of the overall increase in the variance of log hourly wages in the United States from 1979 to 2007 is explained by the more rapid growth in the variance of log wages in larger locations relative to smaller locations. This influence occurred throughout the wage distribution and was most prevalent during the 1990s. More rapid growth in within skill group inequality in larger cities has been by far the most important force driving these city size specific patterns in the data. Differences in the industrial composition of cities of different sizes explain up to one-third of this city size effect. These results suggest an important role for agglomeration economies in generating changes in the wage structure during the study period. PMID:24954958

  3. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
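
    The kind of null simulation described above can be reproduced in outline with the sketch below, which regresses an independently simulated trait on a rare variant and records how often p < 0.05; the sample size, minor allele frequencies and gamma shape parameter are illustrative assumptions rather than the GAW 19 settings.

      import numpy as np
      from scipy import stats

      def type1_rate(maf, n=1000, n_rep=2000, alpha=0.05, trait="gamma", seed=0):
          """Fraction of null simulations with p < alpha for a simple linear regression on a SNV."""
          rng = np.random.default_rng(seed)
          hits = tested = 0
          for _ in range(n_rep):
              geno = rng.binomial(2, maf, size=n)               # additive 0/1/2 genotype coding
              if geno.std() == 0:                               # skip monomorphic draws
                  continue
              if trait == "gamma":
                  y = rng.gamma(shape=1.0, scale=1.0, size=n)   # skewed trait, independent of geno
              else:
                  y = rng.standard_normal(n)                    # normal trait
              tested += 1
              hits += stats.linregress(geno, y).pvalue < alpha
          return hits / tested

      for maf in (0.20, 0.05, 0.01, 0.005):
          print(maf, "gamma:", type1_rate(maf), "normal:", type1_rate(maf, trait="normal"))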

  4. A Model for Hydraulic Properties Based on Angular Pores with Lognormal Size Distribution

    NASA Astrophysics Data System (ADS)

    Durner, W.; Diamantopoulos, E.

    2014-12-01

    Soil water retention and unsaturated hydraulic conductivity curves are mandatory for modeling water flow in soils. It is a common approach to measure few points of the water retention curve and to calculate the hydraulic conductivity curve by assuming that the soil can be represented as a bundle of capillary tubes. Both curves are then used to predict water flow at larger spatial scales. However, the predictive power of these curves is often very limited. This can be very easily illustrated if we measure the soil hydraulic properties (SHPs) for a drainage experiment and then use these properties to predict the water flow in the case of imbibition. Further complications arise from the incomplete wetting of water at the solid matrix which results in finite values of the contact angles between the solid-water-air interfaces. To address these problems we present a physically-based model for hysteretic SHPs. This model is based on bundles of angular pores. Hysteresis for individual pores is caused by (i) different snap-off pressures during filling and emptying of single angular pores and (ii) by different advancing and receding contact angles for fluids that are not perfectly wettable. We derive a model of hydraulic conductivity as a function of contact angle by assuming flow perpendicular to pore cross sections and present closed-form expressions for both the sample scale water retention and hydraulic conductivity function by assuming a log-normal statistical distribution of pore size. We tested the new model against drainage and imbibition experiments for various sandy materials which were conducted with various liquids of differing wettability. The model described both imbibition and drainage experiments very well by assuming a unique pore size distribution of the sample and a zero contact angle for the perfectly wetting liquid. Eventually, we see the possibility to relate the particle size distribution with a model which describes the SHPs.
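
    A closed-form retention curve of the kind that follows from a log-normal pore-size distribution is the Kosugi-type expression sketched below; this is only the non-hysteretic, perfectly wetting limit with illustrative parameter values, whereas the model in the study additionally accounts for angular-pore snap-off and advancing/receding contact angles.

      import numpy as np
      from scipy.special import erfc

      def kosugi_se(h, hm, sigma):
          """Effective saturation Se(h) for a log-normal pore-size model (Kosugi type).
          h, hm: suction heads in the same units; sigma: log-standard deviation."""
          h = np.asarray(h, dtype=float)
          se = np.ones_like(h)
          pos = h > 0
          se[pos] = 0.5 * erfc(np.log(h[pos] / hm) / (np.sqrt(2.0) * sigma))
          return se

      h = np.logspace(0, 4, 9)                 # suction heads, e.g. cm of water (illustrative)
      print(kosugi_se(h, hm=100.0, sigma=1.2))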

  5. Effects of the turnover rate on the size distribution of firms: An application of the kinetic exchange models

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Anindya S.

    2012-12-01

    We address the issue of the distribution of firm size. To this end we propose a model of firms in a closed, conserved economy populated with zero-intelligence agents who continuously move from one firm to another. We then analyze the size distribution and related statistics obtained from the model. There are three well known statistical features obtained from the panel study of the firms i.e., the power law in size (in terms of income and/or employment), the Laplace distribution in the growth rates and the slowly declining standard deviation of the growth rates conditional on the firm size. First, we show that the model generalizes the usual kinetic exchange models with binary interaction to interactions between an arbitrary number of agents. When the number of interacting agents is in the order of the system itself, it is possible to decouple the model. We provide exact results on the distributions which are not known yet for binary interactions. Our model easily reproduces the power law for the size distribution of firms (Zipf’s law). The fluctuations in the growth rate falls with increasing size following a power law (though the exponent does not match with the data). However, the distribution of the difference of the firm size in this model has Laplace distribution whereas the real data suggests that the difference of the log of sizes has the same distribution.

  6. Plume particle collection and sizing from static firing of solid rocket motors

    NASA Technical Reports Server (NTRS)

    Sambamurthi, Jay K.

    1995-01-01

    A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass averaged diameters, d43, measured from the samples for the different motors, ranged from 8 to 11 μm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry standard Hermsen's correlation within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of the particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13 - 0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.

  7. Fatigue Shifts and Scatters Heart Rate Variability in Elite Endurance Athletes

    PubMed Central

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    Purpose This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in ‘fatigue’ or in ‘no-fatigue’ state in ‘real life’ conditions. Methods 57 elite Nordic-skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was quoted with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms2 and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). Results 172 trials were identified as in a ‘fatigue’ and 891 as in ‘no-fatigue’ state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between ‘fatigue’ and ‘no-fatigue’: HRSU (+6.27±0.61 bpm), logTPSU (−0.36±0.04), logLFSU (−0.27±0.04), logHFSU (−0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (−9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (−0.28±0.03), logLFST (−0.29±0.03), logHFST (−0.32±0.04). Also, intra-individual variance of HRV parameters was larger (P<0.05) in the ‘fatigue’ state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). Conclusion HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied with larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from no-fatigue state, possibly reflecting different fatigue-induced alterations of HRV pattern. PMID:23951198

  8. Analysis of Mount St. Helens ash from optical photoelectric photometry

    NASA Technical Reports Server (NTRS)

    Cardelli, J. A.; Ackerman, T. P.

    1983-01-01

    The optical properties of suspended dust particles from the eruption of Mt. St. Helens on July 23, 1980 are investigated using photoelectric observations of standard stars obtained on the 0.76-m telescope at the University of Washington 48 hours after the eruption. Measurements were made with five broad-band filters centered at 3910, 5085, 5480, 6330, and 8050 A on stars of varying color and over a wide range of air masses. Anomalous extinction effects due to the volcanic ash were detected, and a significant change in the wavelength-dependent extinction parameter during the course of the observations was established by statistical analysis. Mean particle size (a) and column density (N) are estimated using the Mie theory, assuming a log-normal particle-size distribution: a = 0.18 micron throughout; N = 1.02 × 10^9 per square cm before 7:00 UT and 2.33 × 10^9 per square cm after 8:30 UT on July 25, 1980. The extinction is attributed to low-level, slowly migrating ash, possibly combined with products of gas-to-particle conversion and coagulation.

  9. Performance Analysis of Amplify-and-Forward Relaying FSO/SC-QAM Systems over Weak Turbulence Channels and Pointing Error Impairments

    NASA Astrophysics Data System (ADS)

    Trung, Ha Duyen

    2017-12-01

    In this paper, the end-to-end performance of a free-space optical (FSO) communication system combined with Amplify-and-Forward (AF)-assisted, fixed-gain relaying and subcarrier quadrature amplitude modulation (SC-QAM) over weak atmospheric turbulence channels, modeled by a log-normal distribution with pointing error impairments, is studied. More specifically, unlike previous studies on AF relaying FSO communication systems without pointing error effects, the pointing error effect is studied by taking into account the influence of beamwidth, aperture size and jitter variance. In addition, a combination of these models is used to analyze the combined effect of atmospheric turbulence and pointing error on AF relaying FSO/SC-QAM systems. Finally, an analytical expression is derived to evaluate the average symbol error rate (ASER) performance of such systems. The numerical results show the impact of pointing error on the performance of AF relaying FSO/SC-QAM systems and how proper choices of aperture size and beamwidth can improve the performance of such systems. Some analytical results are confirmed by Monte-Carlo simulations.
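
    A minimal sketch of the kind of Monte-Carlo check mentioned at the end of the abstract, assuming a log-normal intensity channel and the standard Gaussian-noise expression for square M-QAM symbol error rate. The function name, the log-amplitude standard deviation and the SNR definition are illustrative assumptions, not the paper's model (relaying gain and pointing error are omitted here).

        import numpy as np
        from scipy.stats import norm

        def avg_ser_mqam_lognormal(snr0_db, M=16, sigma_x=0.1, n=200_000, seed=0):
            """Monte Carlo estimate of the average SER for square M-QAM over a
            log-normal intensity channel (weak turbulence). sigma_x is the std
            of the log-amplitude; snr0_db is the electrical SNR at unit irradiance."""
            rng = np.random.default_rng(seed)
            # irradiance I = exp(2X), X ~ N(-sigma_x^2, sigma_x^2), so E[I] = 1
            X = rng.normal(-sigma_x**2, sigma_x, n)
            I = np.exp(2 * X)
            snr = 10**(snr0_db / 10) * I**2              # intensity modulation: electrical SNR ∝ I^2
            q = norm.sf(np.sqrt(3 * snr / (M - 1)))      # Gaussian Q-function
            ser = 4 * (1 - 1/np.sqrt(M)) * q - 4 * (1 - 1/np.sqrt(M))**2 * q**2
            return ser.mean()

        print(avg_ser_mqam_lognormal(20.0, M=16, sigma_x=0.15))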

  10. Analysis of albedo versus cloud fraction relationships in liquid water clouds using heuristic models and large eddy simulation

    NASA Astrophysics Data System (ADS)

    Feingold, Graham; Balsells, Joseph; Glassmeier, Franziska; Yamaguchi, Takanobu; Kazil, Jan; McComiskey, Allison

    2017-07-01

    The relationship between the albedo of a cloudy scene A and cloud fraction fc is studied with the aid of heuristic models of stratocumulus and cumulus clouds. Existing work has shown that scene albedo increases monotonically with increasing cloud fraction but that the relationship varies from linear to superlinear. The reasons for these differences in functional dependence are traced to the relationship between cloud deepening and cloud widening. When clouds deepen with no significant increase in fc (e.g., in solid stratocumulus), the relationship between A and fc is linear. When clouds widen as they deepen, as in cumulus cloud fields, the relationship is superlinear. A simple heuristic model of a cumulus cloud field with a power law size distribution shows that the superlinear A-fc behavior is traced out either through random variation in cloud size distribution parameters or as the cloud field oscillates between a relative abundance of small clouds (steep slopes on a log-log plot) and a relative abundance of large clouds (flat slopes). Oscillations of this kind manifest in large eddy simulation of trade wind cumulus where the slope and intercept of the power law fit to the cloud size distribution are highly correlated. Further analysis of the large eddy model-generated cloud fields suggests that cumulus clouds grow larger and deeper as their underlying plumes aggregate; this is followed by breakup of large plumes and a tendency to smaller clouds. The cloud and thermal size distributions oscillate back and forth approximately in unison.

  11. Compacting biomass waste materials for use as fuel

    NASA Astrophysics Data System (ADS)

    Zhang, Ou

    Every year, biomass waste materials are produced in large quantity. The combustibles in biomass waste materials make up over 70% of the total waste. How to utilize these waste materials is important to the nation and the world. The purpose of this study is to test optimum processes and conditions of compacting a number of biomass waste materials to form a densified solid fuel for use at coal-fired power plants or ordinary commercial furnaces. Successful use of such fuel as a substitute for or in cofiring with coal not only solves a solid waste disposal problem but also reduces the release of some gases from burning coal which cause health problems, acid rain and global warming. The unique punch-and-die process developed at the Capsule Pipeline Research Center, University of Missouri-Columbia was used for compacting the solid wastes, including waste paper, plastics (both film and hard products), textiles, leaves, and wood. The compaction was performed to produce strong compacts (biomass logs) at room temperature without binder and without preheating. The compaction conditions important to the commercial production of densified biomass fuel logs, including compaction pressure, pressure holding time, back pressure, moisture content, particle size, binder effects, and mold conditions, were studied and optimized. The properties of the biomass logs were evaluated in terms of physical, mechanical, and combustion characteristics. It was found that the compaction pressure and the initial moisture content of the biomass material play critical roles in producing high-quality biomass logs. Under optimized compaction conditions, biomass waste materials can be compacted into high-quality logs with a density of 0.8 to 1.2 g/cm3. The logs made from the combustible wastes have a heating value in the range of 6,000 to 8,000 Btu/lb, which is only slightly (10 to 30%) less than that of subbituminous coal. To evaluate the feasibility of cofiring biomass logs with coal, burn tests were conducted in a stoker boiler. A separate burning test was also carried out by burning biomass logs alone in an outdoor hot-water furnace for heating a building. Based on a previous coal compaction study, the process of biomass compaction was studied numerically by use of a non-linear finite element code. A constitutive model with sufficient generality was adapted for biomass material to deal with pore contraction during compaction. A contact node algorithm was applied to implement the effect of mold wall friction in the finite element program. Numerical analyses were made to investigate the pressure distribution in a die normal to the axis of compaction, and to investigate the density distribution in a biomass log after compaction. The results of the analyses gave generally good agreement with theoretical analysis of coal log compaction, although assumptions had to be made about the variation in the elastic modulus of the material and the Poisson's ratio during the compaction cycle.

  12. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
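
    A minimal sketch of the kind of model comparison described here, assuming count data in a NumPy array and fitting the Poisson and negative binomial distributions by maximum likelihood before comparing them with AIC; the counts shown are illustrative only, and the zero-inflated variants discussed in the abstract would require an additional mixture component.

        import numpy as np
        from scipy import stats, optimize

        def aic_poisson(counts):
            lam = counts.mean()                      # ML estimate of the Poisson rate
            ll = stats.poisson.logpmf(counts, lam).sum()
            return 2 * 1 - 2 * ll                    # AIC = 2k - 2 log-likelihood

        def aic_negbin(counts):
            # negative binomial parameterised by (n, p), fitted by direct likelihood maximisation
            def nll(params):
                n, p = params
                if n <= 0 or not (0 < p < 1):
                    return np.inf
                return -stats.nbinom.logpmf(counts, n, p).sum()
            res = optimize.minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead")
            return 2 * 2 + 2 * res.fun

        counts = np.array([0, 0, 3, 0, 1, 7, 0, 0, 2, 0, 0, 12, 0, 1])   # illustrative, zero-heavy counts
        print("Poisson AIC:", aic_poisson(counts), "NegBin AIC:", aic_negbin(counts))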

  13. Sawlog sizes: a comparison in two Appalachian areas

    Treesearch

    Curtis D. Goho; A. Jeff Martin

    1973-01-01

    Frequency distributions of log diameter and length were prepared for eight Appalachian hardwood species. Data obtained in Ohio, Kentucky, and Tennessee were compared with information collected previously from West Virginia and New England. With the exception of red oak, significant regional differences were found.

  14. UV missile-plume signature model

    NASA Astrophysics Data System (ADS)

    Roblin, Antoine; Baudoux, Pierre E.; Chervet, Patrick

    2002-08-01

    A new 3D radiative code is used to solve the radiative transfer equation in the UV spectral domain for a nonequilibrium, axisymmetric medium such as a rocket plume composed of hot reactive gases and metallic oxide particles like alumina. Calculations take into account the dominant chemiluminescence radiation mechanism and multiple scattering effects produced by alumina particles. Plume radiative properties are studied by using a simple cylindrical medium of finite length, deduced from different aerothermochemical real rocket plume afterburning zones. Assuming a log-normal size distribution of alumina particles, optical properties are calculated using Mie theory. Due to large uncertainties in particle properties, systematic tests have been performed in order to evaluate the influence of the different input data (refractive index, particle mean geometric radius) upon the radiance field. These computations will help us to define the set of parameters which need to be known accurately in order to compare computations with radiance measurements obtained during field experiments.

  15. The Faraday effect of natural and artificial ferritins.

    PubMed

    Koralewski, M; Kłos, J W; Baranowski, M; Mitróová, Z; Kopčanský, P; Melníková, L; Okuda, M; Schwarzacher, W

    2012-09-07

    Measurements of the Faraday rotation at room temperature over the light wavelength range of 300-680 nm for horse spleen ferritin (HSF), magnetoferritin with different loading factors (LFs) and nanoscale magnetite and Fe(2)O(3) suspensions are reported. The Faraday rotation and the magnetization of the materials studied present similar magnetic field dependences and are characteristic of a superparamagnetic system. The dependence of the Faraday rotation on the magnetic field is described, excluding HSF and Fe(2)O(3), by a Langevin function with a log-normal distribution of the particle size allowing the core diameters of the substances studied to be calculated. It was found that the specific Verdet constant depends linearly on the LF. Differences in the Faraday rotation spectra and their magnetic field dependences allow discrimination between magnetoferritin with maghemite and magnetite cores which can be very useful in biomedicine.
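
    A rough illustration of the fitting approach described here: a Langevin response averaged over a log-normal distribution of core diameters. The saturation magnetization, median diameter, log-standard deviation and temperature below are hypothetical placeholders, not values from the paper.

        import numpy as np
        from scipy.integrate import quad

        MU0 = 4e-7 * np.pi          # vacuum permeability, T·m/A
        KB = 1.380649e-23           # Boltzmann constant, J/K

        def langevin(x):
            # Langevin function L(x) = coth(x) - 1/x, valid for x > 0
            return 1.0 / np.tanh(x) - 1.0 / x

        def mean_magnetization(H, d_median, sigma, Ms, T=300.0):
            """Reduced magnetization <L> for superparamagnetic cores whose diameters
            follow a log-normal distribution (median d_median in m, log-std sigma)."""
            mu_log = np.log(d_median)
            def integrand(d, h):
                pdf = (np.exp(-(np.log(d) - mu_log) ** 2 / (2 * sigma ** 2))
                       / (d * sigma * np.sqrt(2 * np.pi)))
                moment = Ms * np.pi * d ** 3 / 6.0       # magnetic moment of one core
                return langevin(MU0 * moment * h / (KB * T)) * pdf
            return np.array([quad(integrand, d_median / 20, d_median * 20, args=(h,))[0]
                             for h in np.atleast_1d(H)])

        # illustrative call: magnetite-like cores with an 8 nm median diameter
        fields = np.linspace(1e3, 8e5, 10)               # applied field H, A/m
        m_curve = mean_magnetization(fields, 8e-9, 0.3, 4.8e5)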

  16. The VMC Survey. XXII. Hierarchical Star Formation in the 30 Doradus-N158-N159-N160 Star-forming Complex

    NASA Astrophysics Data System (ADS)

    Sun, Ning-Chen; de Grijs, Richard; Subramanian, Smitha; Cioni, Maria-Rosa L.; Rubele, Stefano; Bekki, Kenji; Ivanov, Valentin D.; Piatti, Andrés E.; Ripepi, Vincenzo

    2017-02-01

    We study the hierarchical stellar structures in a ~1.5 deg² area covering the 30 Doradus-N158-N159-N160 star-forming complex with the VISTA Survey of Magellanic Clouds. Based on the young upper main-sequence stars, we find that the surface densities cover a wide range of values, from log(Σ · pc²) ≲ -2.0 to log(Σ · pc²) ≳ 0.0. Their distributions are highly non-uniform, showing groups that frequently have subgroups inside. The sizes of the stellar groups do not exhibit characteristic values, and range continuously from several parsecs to more than 100 pc; the cumulative size distribution can be well described by a single power law, with the power-law index indicating a projected fractal dimension D2 = 1.6 ± 0.3. We suggest that the phenomena revealed here support a scenario of hierarchical star formation. Comparisons with other star-forming regions and galaxies are also discussed.

  17. Estimation of Microbial Concentration in Food Products from Qualitative, Microbiological Test Data with the MPN Technique.

    PubMed

    Fujikawa, Hiroshi

    2017-01-01

    Microbial concentration in samples of a food product lot has been generally assumed to follow the log-normal distribution in food sampling, but this distribution cannot accommodate the concentration of zero. In the present study, first, a probabilistic study with the most probable number (MPN) technique was done for a target microbe present at a low (or zero) concentration in food products. Namely, based on the number of target pathogen-positive samples in the total samples of a product found by a qualitative, microbiological examination, the concentration of the pathogen in the product was estimated by means of the MPN technique. The effects of the sample size and the total sample number of a product were then examined. Second, operating characteristic (OC) curves for the concentration of a target microbe in a product lot were generated on the assumption that the concentration of a target microbe could be expressed with the Poisson distribution. OC curves for Salmonella and Cronobacter sakazakii in powdered formulae for infants and young children were successfully generated. The present study suggested that the MPN technique and the Poisson distribution would be useful for qualitative microbiological test data analysis for a target microbe whose concentration in a lot is expected to be low.
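
    A small sketch of the single-dilution MPN calculation implied here: with Poisson-distributed cells, the probability that a sample of mass m is negative is exp(-λm), so λ can be back-calculated from the fraction of negative samples. The sample counts and mass below are illustrative, not the paper's data.

        import numpy as np

        def mpn_single_dilution(positives, total, sample_size_g):
            """Most probable number estimate (cells per gram) from one dilution level,
            assuming Poisson-distributed cells so P(negative) = exp(-lambda * m)."""
            if positives == total:
                raise ValueError("all samples positive: MPN is unbounded at one dilution")
            return -np.log((total - positives) / total) / sample_size_g

        # e.g. 3 positives out of 30 samples of 25 g each (illustrative numbers)
        print(mpn_single_dilution(3, 30, 25.0))   # ~0.0042 cells per gram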

  18. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  19. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions and exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  20. Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process

    PubMed Central

    Finley, Benjamin J.; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621

  1. Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.

    PubMed

    Finley, Benjamin J; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.

  2. Power laws in citation distributions: evidence from Scopus.

    PubMed

    Brzezinski, Michal

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is a considerable empirical controversy on which statistical model fits the citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
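
    A short sketch of this style of likelihood-ratio comparison, assuming the third-party powerlaw package (Alstott et al.) is installed and using synthetic counts in place of the Scopus data; the parameters and the Zipf generator are illustrative only.

        import numpy as np
        import powerlaw   # Clauset/Shalizi/Newman-style fitting; assumed installed

        rng = np.random.default_rng(0)
        citations = rng.zipf(3.5, size=5000)          # synthetic citation counts, not Scopus data

        fit = powerlaw.Fit(citations, discrete=True)  # xmin chosen by minimizing the KS distance
        print("alpha =", fit.power_law.alpha, "xmin =", fit.power_law.xmin)

        # likelihood-ratio comparison: R > 0 favours the power law, p gives the significance
        R, p = fit.distribution_compare("power_law", "lognormal")
        print(R, p)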

  3. A Bayesian approach to meta-analysis of plant pathology studies.

    PubMed

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard) which was evaluated only in seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.

  4. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    PubMed

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

    Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the normal distribution assumption can be successfully applied to MUAC. In light of this promising finding, further research is ongoing to evaluate the performance of a normal distribution based approach to estimating the prevalence of wasting using MUAC.
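
    A minimal sketch of two of the steps described above, testing normality with the Shapiro-Wilk test and applying a Box-Cox power transformation, using SciPy and a synthetic MUAC-like sample (the log-normal parameters are arbitrary and only stand in for survey data).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        muac = rng.lognormal(mean=np.log(140), sigma=0.08, size=600)   # synthetic MUAC values, mm

        w, p = stats.shapiro(muac)
        print(f"Shapiro-Wilk before transformation: W={w:.3f}, p={p:.4f}")

        transformed, lam = stats.boxcox(muac)        # Box-Cox power transformation
        w2, p2 = stats.shapiro(transformed)
        print(f"lambda={lam:.2f}; after transformation: W={w2:.3f}, p={p2:.4f}")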

  5. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model updating (FEM) advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the Posterior Distribution Function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the Posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the Posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of the Hybrid Monte Carlo (HMC) and is designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method is tested on the updating of two real structures: an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and is compared to the application of the HMC algorithm on the same structures.
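
    For readers unfamiliar with the HMC move that SHMC modifies, the sketch below shows one generic HMC update (leapfrog integration of the Hamiltonian dynamics followed by a Metropolis accept/reject step). It is a plain HMC step, not the shadow-Hamiltonian variant of the paper, and the step size and trajectory length are arbitrary.

        import numpy as np

        def hmc_step(theta, log_post, grad_log_post, eps=0.05, n_leapfrog=20, rng=None):
            """One Hybrid Monte Carlo update: resample momenta, integrate the trajectory
            with the leapfrog scheme, and accept with the Metropolis rule on the Hamiltonian."""
            rng = rng or np.random.default_rng()
            p = rng.standard_normal(theta.size)
            theta_new, p_new = theta.copy(), p.copy()
            p_new += 0.5 * eps * grad_log_post(theta_new)        # half momentum step
            for _ in range(n_leapfrog - 1):
                theta_new += eps * p_new                         # full position step
                p_new += eps * grad_log_post(theta_new)          # full momentum step
            theta_new += eps * p_new
            p_new += 0.5 * eps * grad_log_post(theta_new)        # final half momentum step
            h_old = -log_post(theta) + 0.5 * p @ p
            h_new = -log_post(theta_new) + 0.5 * p_new @ p_new
            if np.log(rng.random()) < h_old - h_new:
                return theta_new, True
            return theta, False

        # usage: sample a standard bivariate normal as a toy posterior
        logp = lambda th: -0.5 * th @ th
        grad = lambda th: -th
        theta, draws = np.zeros(2), []
        for _ in range(1000):
            theta, _ = hmc_step(theta, logp, grad)
            draws.append(theta.copy())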

  6. Transport and solubility of Hetero-disperse dry deposition particulate matter subject to urban source area rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Ying, G.; Sansalone, J.

    2010-03-01

    With respect to hydrologic processes, the impervious pavement interface significantly alters relationships between rainfall and runoff. Commensurate with alteration of hydrologic processes, the pavement also facilitates transport and solubility of dry deposition particulate matter (PM) in runoff. This study examines dry depositional flux rates, granulometric modification by runoff transport, as well as generation of total dissolved solids (TDS), alkalinity and conductivity in source area runoff resulting from PM solubility. PM is collected from a paved source area transportation corridor (I-10) in Baton Rouge, Louisiana encompassing 17 dry deposition and 8 runoff events. The mass-based granulometric particle size distribution (PSD) is measured and modeled through a cumulative gamma function, while PM surface area distributions across the PSD follow a log-normal distribution. Dry deposition flux rates are modeled as separate first-order exponential functions of previous dry hours (PDH) for PM and suspended, settleable and sediment fractions. When translocated from dry deposition into runoff, PSDs are modified, with a d50m decreasing from 331 to 14 μm after transport and 60 min of settling. Solubility experiments as a function of pH, contact time and particle size using source area rainfall generate constitutive models to reproduce pH, alkalinity and TDS for historical events. Equilibrium pH, alkalinity and TDS are strongly influenced by particle size and contact times. The constitutive leaching models are combined with measured PSDs from a series of rainfall-runoff events to demonstrate that the model results replicate alkalinity and TDS in runoff from the subject watershed. Results illustrate the granulometry of dry deposition PM, modification of PSDs along the drainage pathway, and the role of PM solubility for generation of TDS, alkalinity and conductivity in urban source area rainfall-runoff.

  7. Grain size distribution of fault rocks: implication from natural gouges and high velocity friction experiments

    NASA Astrophysics Data System (ADS)

    Yang, X.; Chen, J.; Duan, B.

    2011-12-01

    The grain size distribution (GSD) is considered an important parameter for the characterization of fault rocks. The relative magnitude of energy radiated as seismic waves to fracture energy plays a fundamental role in earthquake rupture dynamics. Currently, the details of the grain size reduction mechanism and the energy budget are not well known. Here we present GSD measurements on fault rocks (gouge and breccias) in the main slip zone associated with the Wenchuan earthquake of 12 May 2008, and on gouges produced by high velocity friction (HVF) experiments. The HVF experiments were carried out on air-dry granitic powder with grain sizes of 150 - 300 μm at a normal stress of 1.0 MPa, a slip rate of 1.0 m/s and slip distances from 10 m to 30 m. On log-log plots of N(r) versus equivalent radius, two distinct linear parts can be discriminated, with their intersection at 1 - 2 μm defined as the critical radius rc. One power-law regime spans about 4 decades from 4 μm to 16 mm and the other covers a range of 0.2 - 2.0 μm. Larger fractal dimensions, from 2.7 to 3.5, are obtained for the larger grain size regime, while lower values, ranging from 1.7 to 2.1, are obtained for the smaller one. This two-stage distribution means the GSD is not self-similar (scale invariant) and the dominant mechanisms of grain size reduction may differ between the two regimes. XRD data show that the content of quartz drops greatly or disappears at 0.5 - 0.25 μm. The GSD of the HVF experimental products shows features similar to the natural gouges; for instance, both show the two-stage GSD with a critical radius rc of 1 - 2 μm. Grains smaller than 1 μm show rounded edges and equiaxial shapes, whereas a variety of grain shapes can be observed in grains larger than 5 μm. Several implications follow from the measurements and experiments. (1) rc corresponds to the average grinding limit of the rock-forming minerals; further grain size reduction can be attributed to attrition during post-rupture processes such as steady slip. (2) 90% of the minerals smaller than 0.5 μm are clays, whose origin is associated neither with the initial rupturing nor with further grain attrition, if clay minerals within gouges are considered products of fluid processes during inter-seismic intervals rather than of seismic slip. (3) Only the grains created by the rupture process during the earthquake should be used to calculate fracture energy; the grains formed by attrition during fault slip and/or inter-seismic intervals need to be excluded to obtain a reasonable result. For example, if D = 3.5 is used over the entire grain size range, the surface fracture energy is over-estimated by more than one order of magnitude. Hence, surface fracture energy is a very small fraction of the total energy budget of the earthquake.

  8. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and fit best the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for being incorporated into any existing flow-based upscaling procedures, which helps in reducing the uncertainty of groundwater models.

  9. A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt

    NASA Astrophysics Data System (ADS)

    Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.

    2014-12-01

    The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods relying on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, and our choice of minimizing the difference in instrument counts rather than phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.

  10. LEVELS OF EXTREMELY LOW-FREQUENCY ELECTRIC AND MAGNETIC FIELDS FROM OVERHEAD POWER LINES IN THE OUTDOOR ENVIRONMENT OF RAMALLAH CITY-PALESTINE.

    PubMed

    Abuasbi, Falastine; Lahham, Adnan; Abdel-Raziq, Issam Rashid

    2018-05-01

    In this study, levels of extremely low-frequency electric and magnetic fields originating from overhead power lines were investigated in the outdoor environment in Ramallah city, Palestine. Spot measurements were applied to record field intensities over a 6-min period. The Spectrum Analyzer NF-5035 was used to perform measurements at 1 m above ground level and directly underneath 40 randomly selected power lines distributed fairly within the city. Levels of electric fields varied depending on the line's category (power line, transformer or distributor): a minimum mean electric field of 3.9 V/m was found under a distributor line, and a maximum of 769.4 V/m under a high-voltage power line (66 kV). However, results of electric fields showed a log-normal distribution with a geometric mean and geometric standard deviation of 35.9 V/m and 2.8, respectively. Magnetic fields measured at power lines, by contrast, were not log-normally distributed; the minimum and maximum mean magnetic fields under power lines were 0.89 and 3.5 μT, respectively.
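
    A small sketch of how the geometric mean and geometric standard deviation quoted above are obtained from a positive-valued sample (the field values listed below are illustrative, not the survey data; note the geometric standard deviation is a dimensionless multiplicative factor).

        import numpy as np

        def geometric_stats(x):
            """Geometric mean and geometric standard deviation of a positive sample;
            for log-normal data these are the natural location and spread parameters."""
            logs = np.log(np.asarray(x, dtype=float))
            return np.exp(logs.mean()), np.exp(logs.std(ddof=1))

        e_fields = [3.9, 12.5, 28.0, 55.0, 110.0, 769.4]   # illustrative V/m values
        gm, gsd = geometric_stats(e_fields)
        print(f"geometric mean = {gm:.1f} V/m, geometric SD = {gsd:.2f}")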

  11. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
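
    For readers unfamiliar with the M- and A-values referred to above, a minimal sketch of the standard transformation from two-channel spot intensities (the intensity values in the usage line are made up).

        import numpy as np

        def ma_values(red, green):
            """Convert two-channel spot intensities to M (log-ratio) and
            A (average log-intensity) values."""
            r, g = np.log2(red), np.log2(green)
            return r - g, 0.5 * (r + g)

        M, A = ma_values(np.array([1200.0, 800.0]), np.array([600.0, 900.0]))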

  12. Predicting durations of online collective actions based on Peaks' heights

    NASA Astrophysics Data System (ADS)

    Lu, Peng; Nie, Shizhao; Wang, Zheng; Jing, Ziwei; Yang, Jianwu; Qi, Zhongxiang; Pujia, Wangmo

    2018-02-01

    Capturing the whole process of collective actions, the peak model contains four stages: Prepare, Outbreak, Peak, and Vanish. Based on the peak model, one key quantity, the ratio between peak heights and durations (spans), is further investigated in this paper. Although durations and peak heights are highly diverse, the ratio between them appears quite stable. If the regularity of this ratio is known, we can predict how long a collective action lasts and when it ends based on the peak's height. In this work, we combined mathematical simulations and empirical big data from 148 cases to explore the regularity of the ratio's distribution. The simulation results indicate that the ratio has a regular distribution, which is not the normal distribution. The empirical data were collected from 148 online collective actions, with the whole process of participation recorded for each. They indicate that the ratio is closer to being log-normally distributed. This rule holds true both for the total set of cases and for subgroups of the 148 online collective actions. A Q-Q plot is applied to check the normality of the ratio's logarithm, and the ratio's logarithm does follow the normal distribution.
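
    A minimal sketch of the Q-Q check described in the last sentence, using synthetic log-normal ratios in place of the 148 observed cases (the log-normal parameters are arbitrary placeholders).

        import numpy as np
        from scipy import stats
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        ratio = rng.lognormal(mean=-1.0, sigma=0.4, size=148)   # synthetic peak-height/duration ratios

        log_ratio = np.log10(ratio)
        stats.probplot(log_ratio, dist="norm", plot=plt)        # Q-Q plot against the normal distribution
        plt.title("Q-Q plot of log10(peak/duration ratio)")
        plt.show()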

  13. Measurement of the distribution of ventilation-perfusion ratios in the human lung with proton MRI: comparison with the multiple inert-gas elimination technique.

    PubMed

    Sá, Rui Carlos; Henderson, A Cortney; Simonson, Tatum; Arai, Tatsuya J; Wagner, Harrieth; Theilmann, Rebecca J; Wagner, Peter D; Prisk, G Kim; Hopkins, Susan R

    2017-07-01

    We have developed a novel functional proton magnetic resonance imaging (MRI) technique to measure regional ventilation-perfusion (V̇ A /Q̇) ratio in the lung. We conducted a comparison study of this technique in healthy subjects ( n = 7, age = 42 ± 16 yr, Forced expiratory volume in 1 s = 94% predicted), by comparing data measured using MRI to that obtained from the multiple inert gas elimination technique (MIGET). Regional ventilation measured in a sagittal lung slice using Specific Ventilation Imaging was combined with proton density measured using a fast gradient-echo sequence to calculate regional alveolar ventilation, registered with perfusion images acquired using arterial spin labeling, and divided on a voxel-by-voxel basis to obtain regional V̇ A /Q̇ ratio. LogSDV̇ and LogSDQ̇, measures of heterogeneity derived from the standard deviation (log scale) of the ventilation and perfusion vs. V̇ A /Q̇ ratio histograms respectively, were calculated. On a separate day, subjects underwent study with MIGET and LogSDV̇ and LogSDQ̇ were calculated from MIGET data using the 50-compartment model. MIGET LogSDV̇ and LogSDQ̇ were normal in all subjects. LogSDQ̇ was highly correlated between MRI and MIGET (R = 0.89, P = 0.007); the intercept was not significantly different from zero (-0.062, P = 0.65) and the slope did not significantly differ from identity (1.29, P = 0.34). MIGET and MRI measures of LogSDV̇ were well correlated (R = 0.83, P = 0.02); the intercept differed from zero (0.20, P = 0.04) and the slope deviated from the line of identity (0.52, P = 0.01). We conclude that in normal subjects, there is a reasonable agreement between MIGET measures of heterogeneity and those from proton MRI measured in a single slice of lung. NEW & NOTEWORTHY We report a comparison of a new proton MRI technique to measure regional V̇ A /Q̇ ratio against the multiple inert gas elimination technique (MIGET). The study reports good relationships between measures of heterogeneity derived from MIGET and those derived from MRI. Although currently limited to a single slice acquisition, these data suggest that single sagittal slice measures of V̇ A /Q̇ ratio provide an adequate means to assess heterogeneity in the normal lung. Copyright © 2017 the American Physiological Society.

  14. A model for the spatial distribution of snow water equivalent parameterized from the spatial variability of precipitation

    NASA Astrophysics Data System (ADS)

    Skaugen, Thomas; Weltzien, Ingunn H.

    2016-09-01

    Snow is an important and complicated element in hydrological modelling. The traditional catchment hydrological model, with its many free calibration parameters, also in snow sub-models, is not a well-suited tool for predicting conditions for which it has not been calibrated. Such conditions include prediction in ungauged basins and assessing hydrological effects of climate change. In this study, a new model for the spatial distribution of snow water equivalent (SWE), parameterized solely from observed spatial variability of precipitation, is compared with the current snow distribution model used in the operational flood forecasting models in Norway. The former model uses a dynamic gamma distribution and is called Snow Distribution_Gamma (SD_G), whereas the latter model has a fixed, calibrated coefficient of variation, which parameterizes a log-normal model for snow distribution and is called Snow Distribution_Log-Normal (SD_LN). The two models are implemented in the parameter-parsimonious rainfall-runoff model Distance Distribution Dynamics (DDD), and their capability for predicting runoff, SWE and snow-covered area (SCA) is tested and compared for 71 Norwegian catchments. The calibration period is 1985-2000 and the validation period is 2000-2014. Results show that SD_G better simulates SCA when compared with MODIS satellite-derived snow cover. In addition, SWE is simulated more realistically in that seasonal snow melts out, and the build-up of "snow towers", which gives spurious positive trends in SWE and is typical for SD_LN, is prevented. The precision of runoff simulations using SD_G is slightly inferior, with a reduction in the Nash-Sutcliffe and Kling-Gupta efficiency criteria of 0.01, but it is shown that the high precision in runoff prediction using SD_LN is accompanied by erroneous simulations of SWE.

  15. Pan-European comparison of candidate distributions for climatological drought indices, SPI and SPEI

    NASA Astrophysics Data System (ADS)

    Stagge, James; Tallaksen, Lena; Gudmundsson, Lukas; Van Loon, Anne; Stahl, Kerstin

    2013-04-01

    Drought indices are vital to objectively quantify and compare drought severity, duration, and extent across regions with varied climatic and hydrologic regimes. The Standardized Precipitation Index (SPI), a well-reviewed meterological drought index recommended by the WMO, and its more recent water balance variant, the Standardized Precipitation-Evapotranspiration Index (SPEI) both rely on selection of univariate probability distributions to normalize the index, allowing for comparisons across climates. The SPI, considered a universal meteorological drought index, measures anomalies in precipitation, whereas the SPEI measures anomalies in climatic water balance (precipitation minus potential evapotranspiration), a more comprehensive measure of water availability that incorporates temperature. Many reviewers recommend use of the gamma (Pearson Type III) distribution for SPI normalization, while developers of the SPEI recommend use of the three parameter log-logistic distribution, based on point observation validation. Before the SPEI can be implemented at the pan-European scale, it is necessary to further validate the index using a range of candidate distributions to determine sensitivity to distribution selection, identify recommended distributions, and highlight those instances where a given distribution may not be valid. This study rigorously compares a suite of candidate probability distributions using WATCH Forcing Data, a global, historical (1958-2001) climate dataset based on ERA40 reanalysis with 0.5 x 0.5 degree resolution and bias-correction based on CRU-TS2.1 observations. Using maximum likelihood estimation, alternative candidate distributions are fit for the SPI and SPEI across the range of European climate zones. When evaluated at this scale, the gamma distribution for the SPI results in negatively skewed values, exaggerating the index severity of extreme dry conditions, while decreasing the index severity of extreme high precipitation. This bias is particularly notable for shorter aggregation periods (1-6 months) during the summer months in southern Europe (below 45° latitude), and can partially be attributed to distribution fitting difficulties in semi-arid regions where monthly precipitation totals cluster near zero. By contrast, the SPEI has potential for avoiding this fitting difficulty because it is not bounded by zero. However, the recommended log-logistic distribution produces index values with less variation than the standard normal distribution. Among the alternative candidate distributions, the best fit distribution and the distribution parameters vary in space and time, suggesting regional commonalities within hydroclimatic regimes, as discussed further in the presentation.
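
    A simplified sketch of the SPI normalization step discussed above: fit a gamma distribution to the non-zero monthly precipitation totals, mix in the probability of zero precipitation, and map the resulting cumulative probabilities to a standard normal. This is a generic single-series version, not the study's pan-European workflow, and the clipping bound is an arbitrary numerical safeguard.

        import numpy as np
        from scipy import stats

        def spi(precip):
            """Standardized Precipitation Index for one calendar month and one
            aggregation period, from a 1-D array of aggregated precipitation totals."""
            precip = np.asarray(precip, dtype=float)
            nonzero = precip[precip > 0]
            p_zero = 1.0 - nonzero.size / precip.size
            a, loc, scale = stats.gamma.fit(nonzero, floc=0)          # maximum likelihood gamma fit
            cdf = p_zero + (1 - p_zero) * stats.gamma.cdf(precip, a, loc=loc, scale=scale)
            cdf = np.clip(cdf, 1e-6, 1 - 1e-6)                        # keep the normal quantile finite
            return stats.norm.ppf(cdf)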

  16. Analysis of various factors affecting pupil size in patients with glaucoma.

    PubMed

    Park, Ji Woong; Kang, Bong Hui; Kwon, Ji Won; Cho, Kyong Jin

    2017-09-16

    Pupil size is an important factor in predicting post-operative satisfaction. We assessed the correlation between pupil size, measured by Humphrey static perimetry, and various affecting factors in patients with glaucoma. In total, 825 eyes of 415 patients were evaluated retrospectively. Pupil size was measured with Humphrey static perimetry. Comparisons of pupil size according to the presence of glaucoma were evaluated, as were correlations between pupil size and various factors, including age, logMAR best corrected visual acuity (BCVA), retinal nerve fiber layer (RNFL) thickness, spherical equivalent, intraocular pressure, axial length, central corneal thickness, white-to-white, and the kappa angle. Pupil size was significantly smaller in glaucoma patients than in glaucoma suspects (p < 0.001) or the normal group (p < 0.001). Pupil size decreased significantly as age (p < 0.001) and central cornea thickness (p = 0.007) increased, and increased significantly as logMAR BCVA (p = 0.02) became worse and spherical equivalent (p = 0.007) and RNFL thickness (p = 0.042) increased. In patients older than 50 years, pupil size was significantly larger in eyes with a history of cataract surgery. Humphrey static perimetry can be useful in measuring pupil size. Pupil size was significantly smaller in eyes with glaucoma. Other factors affecting pupil size can be used in a preoperative evaluation when considering cataract surgery or laser refractive surgery.

  17. Determination of Irreducible Water Saturation from nuclear magnetic resonance based on fractal theory — a case study of sandstone with complex pore structure

    NASA Astrophysics Data System (ADS)

    Peng, L.; Pan, H.; Ma, H.; Zhao, P.; Qin, R.; Deng, C.

    2017-12-01

    The irreducible water saturation (Swir) is a vital parameter for permeability prediction and original oil and gas estimation. However, the complex pore structure of the rocks makes the parameter difficult to calculate from both laboratory and conventional well-logging methods. In this study, an effective statistical method to predict Swir is derived directly from nuclear magnetic resonance (NMR) data based on fractal theory. The spectrum of transversal relaxation time (T2) is normally considered an indicator of pore size distribution, and the fractal dimensions of micro- and meso-pores are calculated over two specific ranges of the T2 spectrum. Based on the analysis of the fractal characteristics of 22 core samples, which were drilled from four boreholes of tight lithologic oil reservoirs of the Ordos Basin in China, a positive correlation between Swir and porosity is derived. A predictive model for Swir based on linear regressions of the fractal dimensions is then proposed. It reveals that Swir is controlled by the pore size and the roughness of the pore. The reliability of this model is tested and an ideal consistency between predicted results and experimental data is found. This model is a reliable supplement for predicting the irreducible water saturation when the T2 cutoff value cannot be accurately determined.

  18. Proton Straggling in Thick Silicon Detectors

    NASA Technical Reports Server (NTRS)

    Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.

    2017-01-01

    Straggling functions for protons in thick silicon radiation detectors are computed by Monte Carlo simulation. Mean energy loss is constrained by the silicon stopping power, providing higher straggling at low energy and probabilities for stopping within the detector volume. By matching the first four moments of simulated energy-loss distributions, straggling functions are approximated by a log-normal distribution that is accurate for Vavilov k ≥ 0.3. They are verified by comparison to experimental proton data from a charged particle telescope.
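
    As a simplified illustration of the moment-matching idea (matching only the first two moments rather than the four used in the paper), the sketch below returns the log-normal parameters (mu, sigma) whose mean and variance equal given targets.

        import numpy as np

        def lognormal_params_from_moments(mean, var):
            """Parameters (mu, sigma) of a log-normal whose mean and variance match
            the targets; a two-moment stand-in for the paper's four-moment matching."""
            sigma2 = np.log(1.0 + var / mean**2)
            mu = np.log(mean) - 0.5 * sigma2
            return mu, np.sqrt(sigma2)

        mu, sigma = lognormal_params_from_moments(2.5, 0.8)   # illustrative energy-loss moments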

  19. Simulating Bubble Plumes from Breaking Waves with a Forced-Air Venturi

    NASA Astrophysics Data System (ADS)

    Long, M. S.; Keene, W. C.; Maben, J. R.; Chang, R. Y. W.; Duplessis, P.; Kieber, D. J.; Beaupre, S. R.; Frossard, A. A.; Kinsey, J. D.; Zhu, Y.; Lu, X.; Bisgrove, J.

    2017-12-01

    It has been hypothesized that the size distribution of bubbles in subsurface seawater is a major factor that modulates the corresponding size distribution of primary marine aerosol (PMA) generated when those bubbles burst at the air-water interface. A primary physical control of the bubble size distribution produced by wave breaking is the associated turbulence that disintegrates larger bubbles into smaller ones. This leads to two characteristic features of bubble size distributions: (1) the Hinze scale which reflects a bubble size above which disintegration is possible based on turbulence intensity and (2) the slopes of log-linear regressions of the size distribution on either side of the Hinze scale that indicate the state of plume evolution or age. A Venturi with tunable seawater and forced air flow rates was designed and deployed in an artificial PMA generator to produce bubble plumes representative of breaking waves. This approach provides direct control of turbulence intensity and, thus, the resulting bubble size distribution characterizable by observations of the Hinze scale and the simulated plume age over a range of known air detrainment rates. Evaluation of performance in different seawater types over the western North Atlantic demonstrated that the Venturi produced bubble plumes with parameter values that bracket the range of those observed in laboratory and field experiments. Specifically, the seawater flow rate modulated the value of the Hinze scale while the forced-air flow rate modulated the plume age parameters. Results indicate that the size distribution of sub-surface bubbles within the generator did not significantly modulate the corresponding number size distribution of PMA produced via bubble bursting.

  20. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.

  1. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws

    USGS Publications Warehouse

    Xiao, X.; White, E.P.; Hooten, M.B.; Durham, S.L.

    2011-01-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain. © 2011 by the Ecological Society of America.
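
    A minimal sketch of the two fitting strategies compared above, on hypothetical power-law data with multiplicative log-normal error (parameter values are illustrative, and the model-averaging step described in the paper is omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Hypothetical power-law data y = a * x**b with multiplicative log-normal
# error, the case in which log-log linear regression is expected to do well.
a_true, b_true = 2.0, 0.75
x = np.sort(rng.uniform(1, 100, 150))
y = a_true * x ** b_true * rng.lognormal(sigma=0.3, size=x.size)

# Method 1: linear regression (LR) on log-transformed data.
b_lr, log_a_lr = np.polyfit(np.log(x), np.log(y), 1)
a_lr = np.exp(log_a_lr)

# Method 2: nonlinear regression (NLR) on the untransformed data,
# which implicitly assumes additive, homoscedastic error.
(a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x ** b, x, y, p0=(1.0, 1.0))

print(f"LR : a = {a_lr:.3f}, b = {b_lr:.3f}")
print(f"NLR: a = {a_nlr:.3f}, b = {b_nlr:.3f}")
```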

  2. Observations of urban airborne particle number concentrations during rush-hour conditions: analysis of the number based size distributions and modal parameters.

    PubMed

    Lingard, Justin J N; Agus, Emily L; Young, David T; Andrews, Gordon E; Tomlin, Alison S

    2006-12-01

    A summertime study of the number concentration and the size distribution of combustion derived nanometre sized particles (termed nanoparticles) from diesel and spark-ignition (SI) engine emissions was made under rush-hour and free-flow traffic conditions at an urban roadside location in Leeds, UK in July 2003. The measured total particle number concentrations (N(TOTAL)) were of the order 1.8 × 10⁴ to 3.4 × 10⁴ cm⁻³, and tended to follow the diurnal traffic flow patterns. The N(TOTAL) was dominated by particles ≤100 nm in diameter, which accounted for 89-93% of the measured particle number. By use of a log-normal fitting procedure, the modal parameters of the number based particle size distribution of urban airborne particulates were derived from the roadside measurements. Four component modes were identified. Two nucleation modes were found, with a smaller, more minor, mode composed principally of sub-11 nm particles, believed to be derived from particles formed from the nucleation of gaseous species in the atmosphere. A second mode, much larger in terms of number, was composed of particles within the size range of 10-20 nm. This second mode was believed to be principally derived from the condensation of the unburned fuel and lube oil (the solvent organic fraction or SOF) as it cooled on leaving the engine exhaust. Third and fourth modes were noted within the size ranges of 28-65 nm and 100-160 nm, respectively. The third mode was believed to be representative of internally mixed Aitken mode particles composed of a soot/ash core with an adsorbed layer of readily volatilisable material. The fourth mode was believed to be composed of chemically aged, secondary particles. The larger nucleation and Aitken modes accounted for 80-90% of the measured N(TOTAL), and the particles in these modes were believed to be derived from SI and diesel engine emissions. The overall size distribution, particularly in modes II-IV, was observed to be strongly related to the number of primary particle emissions, with larger count median diameters observed under conditions where low numbers of primary soot based particles were present.
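
    A log-normal fitting procedure of the kind described above can be sketched as a least-squares fit of a sum of log-normal modes to the measured number size distribution; the two-mode example below uses synthetic data and illustrative mode parameters rather than the Leeds measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp, n, cmd, gsd):
    """dN/dlogDp of one log-normal mode (n = number, cmd = count median
    diameter, gsd = geometric standard deviation)."""
    return (n / (np.sqrt(2 * np.pi) * np.log10(gsd))
            * np.exp(-np.log10(dp / cmd) ** 2 / (2 * np.log10(gsd) ** 2)))

def two_modes(dp, n1, cmd1, gsd1, n2, cmd2, gsd2):
    return lognormal_mode(dp, n1, cmd1, gsd1) + lognormal_mode(dp, n2, cmd2, gsd2)

# Hypothetical roadside spectrum: a nucleation mode near 15 nm and an
# Aitken/soot mode near 50 nm (values illustrative, not the Leeds data).
rng = np.random.default_rng(3)
dp = np.logspace(0.7, 2.7, 60)                      # 5 nm to 500 nm
truth = two_modes(dp, 2.0e4, 15.0, 1.6, 1.2e4, 50.0, 1.8)
obs = truth * (1 + 0.05 * rng.standard_normal(dp.size))

p0 = (1e4, 20.0, 1.5, 1e4, 60.0, 1.7)               # starting guesses
popt, _ = curve_fit(two_modes, dp, obs, p0=p0, maxfev=20000)
print("fitted (N, CMD, GSD) per mode:", np.round(popt, 2))
```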

  3. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective at attenuating SαS impulsive noise, and the algorithm has also been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.

  4. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
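
    Because the mixture distribution is approximately log-normal, the median exp(μ) lies below the mean exp(μ + σ²/2), which is why the median is the more representative summary. Under the simplifying assumption of a single log-normal, the reported mean and median quoted above are enough to back out illustrative (μ, σ) values:

```python
import numpy as np

# Summary values reported above (millions of monarchs per hectare).
mean_density, median_density = 27.9, 21.1

# For a log-normal distribution: median = exp(mu), mean = exp(mu + sigma**2/2),
# so the two reported summaries determine illustrative (mu, sigma) values.
mu = np.log(median_density)
sigma = np.sqrt(2 * (np.log(mean_density) - mu))

draws = np.random.default_rng(4).lognormal(mean=mu, sigma=sigma, size=100_000)
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"simulated mean   ≈ {draws.mean():.1f} million ha^-1")
print(f"simulated median ≈ {np.median(draws):.1f} million ha^-1")
```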

  5. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  6. Particle Morphology and Size Results from the Smoke Aerosol Measurement Experiment-2

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Greenberg, Paul S.; Fischer, David; Meyer, Marit; Mulholland, George; Yuan, Zeng-Guang; Bryg, Victoria; Cleary, Thomas; Yang, Jiann

    2012-01-01

    Results are presented from the Reflight of the Smoke Aerosol Measurement Experiment (SAME-2) which was conducted during Expedition 24 (July-September 2010). The reflight experiment built upon the results of the original flight during Expedition 15 by adding diagnostic measurements and expanding the test matrix. Five different materials representative of those found in spacecraft (Teflon, Kapton, cotton, silicone rubber and Pyrell) were heated to temperatures below the ignition point with conditions controlled to provide repeatable sample surface temperatures and air flow. The air flow past the sample during the heating period ranged from quiescent to 8 cm/s. The smoke was initially collected in an aging chamber to simulate the transport time from the smoke source to the detector. This effective transport time was varied by holding the smoke in the aging chamber for times ranging from 11 to 1800 s. Smoke particle samples were collected on Transmission Electron Microscope (TEM) grids for post-flight analysis. The TEM grids were analyzed to observe the particle morphology and size parameters. The diagnostics included a prototype two-moment smoke detector and three different measures of moments of the particle size distribution. These moment diagnostics were used to determine the particle number concentration (zeroth moment), the diameter concentration (first moment), and the mass concentration (third moment). These statistics were combined to determine the diameter of average mass and the count mean diameter and, by assuming a log-normal distribution, the geometric mean diameter and the geometric standard deviations can also be calculated. Overall the majority of the average smoke particle sizes were found to be in the 200 nm to 400 nm range with the quiescent cases producing some cases with substantially larger particles.
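
    The step from moment-derived diameters to the geometric mean diameter and geometric standard deviation, assuming a log-normal distribution, follows the Hatch-Choate relations; the sketch below uses illustrative diameters rather than SAME-2 values.

```python
import numpy as np

# Illustrative moment-derived diameters (nm); not SAME-2 values.
d_count_mean = 300.0     # first moment / zeroth moment
d_avg_mass = 380.0       # (third moment / zeroth moment) ** (1/3)

# Hatch-Choate relations for a log-normal distribution with geometric mean
# diameter d_g and geometric standard deviation sigma_g (L = ln(sigma_g)):
#   d_count_mean = d_g * exp(0.5 * L**2)
#   d_avg_mass   = d_g * exp(1.5 * L**2)
L2 = np.log(d_avg_mass / d_count_mean)           # equals L**2
sigma_g = np.exp(np.sqrt(L2))
d_g = d_count_mean * np.exp(-0.5 * L2)

print(f"geometric mean diameter ≈ {d_g:.0f} nm")
print(f"geometric standard deviation ≈ {sigma_g:.2f}")
```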

  7. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping contains a set of statistical techniques that detail maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measure the relative risk of the disease is called the Standardized Morbidity Ratio (SMR). It is the ratio of the observed to the expected number of cases in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and log-normal model, which were fitted to data using WinBUGS software. This study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method. The log-normal model can overcome the SMR problem when there is no observed bladder cancer in an area.
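
    The SMR itself is simply the ratio of observed to expected counts per area, as in the minimal sketch below (hypothetical counts); the small expected count in one area illustrates why the estimate becomes unstable for rare diseases or small areas, which is the problem the log-normal model is meant to address.

```python
# Minimal sketch of the classical SMR (observed / expected counts) for a few
# hypothetical areas; the small expected count in area B shows how unstable
# the estimate becomes when the disease is rare or the area is small.
observed = {"area A": 4, "area B": 1, "area C": 12}
expected = {"area A": 2.1, "area B": 0.4, "area C": 11.3}

for area in observed:
    smr = observed[area] / expected[area]
    print(f"{area}: SMR = {smr:.2f}")
```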

  8. Choriocapillaris Flow Features Follow a Power Law Distribution: Implications for Characterization and Mechanisms of Disease Progression.

    PubMed

    Spaide, Richard F

    2016-10-01

    To investigate flow characteristics of the choriocapillaris using optical coherence tomography angiography. Retrospective observational case series. Visualization of flow in individual choriocapillary vessels is below the current resolution limit of optical coherence tomography angiography instruments, but areas of absent flow signal, called flow voids, are resolvable. The central macula was imaged with the Optovue RTVue XR Avanti using a 10-μm slab thickness in 104 eyes of 80 patients who ranged in age from 24 to 99 years. The resultant raw data were thresholded locally with the Phansalkar method and analyzed with generalized estimating equations. The distribution of flow voids vs size of the voids was highly skewed. The data showed a linear log-log plot and goodness-of-fit methods showed the data followed a power law distribution over the relevant range. A slope intercept relationship was also evaluated for the log transform and significant predictors for variables included age, hypertension, pseudodrusen, and the presence of late age-related macular degeneration (AMD) in the fellow eye. The pattern of flow voids forms a scale invariant pattern in the choriocapillaris starting at a size much smaller than a choroidal lobule. Age and hypertension affect the choriocapillaris, a flat layer of capillaries that may serve as an observable surrogate for the neural or systemic microvasculature. Significant alterations detectable in the flow pattern in eyes with pseudodrusen and in eyes with late AMD in the fellow eye offer diagnostic possibilities and impact theories of disease pathogenesis. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. RESIDENTIAL EXPOSURE TO EXTREMELY LOW FREQUENCY ELECTRIC AND MAGNETIC FIELDS IN THE CITY OF RAMALLAH-PALESTINE.

    PubMed

    Abuasbi, Falastine; Lahham, Adnan; Abdel-Raziq, Issam Rashid

    2018-04-01

    This study was focused on the measurement of residential exposure to power frequency (50-Hz) electric and magnetic fields in the city of Ramallah-Palestine. A group of 32 semi-randomly selected residences distributed across the city was investigated for field variations. Measurements were performed with the Spectrum Analyzer NF-5035 and were carried out at one meter above ground level in the residence's bedroom or living room under both zero and normal-power conditions. Field variations were recorded over 6-min intervals and sometimes over a few hours. Electric fields under normal-power use were relatively low; ~59% of residences experienced mean electric fields <10 V/m. The highest mean electric field of 66.9 V/m was found at residence R27. However, electric field values were log-normally distributed with geometric mean and geometric standard deviation of 9.6 and 3.5 V/m, respectively. Background electric fields measured under zero-power use were very low; ~80% of residences experienced background electric fields <1 V/m. Under normal-power use, the highest mean magnetic field (0.45 μT) was found at residence R26 where an indoor power substation exists. However, ~81% of residences experienced mean magnetic fields <0.1 μT. Magnetic fields measured inside the 32 residences also showed a log-normal distribution with geometric mean and geometric standard deviation of 0.04 and 3.14 μT, respectively. Under zero-power conditions, ~7% of residences experienced average background magnetic field >0.1 μT. Fields from appliances showed a maximum mean electric field of 67.4 V/m from a hair dryer, and a maximum mean magnetic field of 13.7 μT from a microwave oven. However, no single result surpassed the ICNIRP limits for general public exposure to ELF fields, but the 0.3-0.4 μT interval associated with possible non-thermal health impacts of exposure to ELF magnetic fields was reached in 13% of the residences.

  10. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data distributions themselves.

  11. Transformation techniques for cross-sectional and longitudinal endocrine data: application to salivary cortisol concentrations.

    PubMed

    Miller, Robert; Plessow, Franziska

    2013-06-01

    Endocrine time series often lack normality and homoscedasticity most likely due to the non-linear dynamics of their natural determinants and the immanent characteristics of the biochemical analysis tools, respectively. As a consequence, data transformation (e.g., log-transformation) is frequently applied to enable general linear model-based analyses. However, to date, data transformation techniques substantially vary across studies and the question of which is the optimum power transformation remains to be addressed. The present report aims to provide a common solution for the analysis of endocrine time series by systematically comparing different power transformations with regard to their impact on data normality and homoscedasticity. For this, a variety of power transformations of the Box-Cox family were applied to salivary cortisol data of 309 healthy participants sampled in temporal proximity to a psychosocial stressor (the Trier Social Stress Test). Whereas our analyses show that un- as well as log-transformed data are inferior in terms of meeting normality and homoscedasticity, they also provide optimum transformations for both, cross-sectional cortisol samples reflecting the distributional concentration equilibrium and longitudinal cortisol time series comprising systematically altered hormone distributions that result from simultaneously elicited pulsatile change and continuous elimination processes. Considering these dynamics of endocrine oscillations, data transformation prior to testing GLMs seems mandatory to minimize biased results. Copyright © 2012 Elsevier Ltd. All rights reserved.
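
    A minimal sketch of the kind of comparison described above: generate a right-skewed stand-in for cortisol concentrations, apply several Box-Cox-family transformations, and compare how well each meets normality (the simulated data and the choice of the Shapiro-Wilk test are illustrative assumptions, not the study's procedure).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical right-skewed "cortisol-like" concentrations; the log-normal
# generator is only a stand-in for real salivary cortisol samples.
conc = rng.lognormal(mean=2.0, sigma=0.6, size=309)

candidates = {
    "untransformed": conc,
    "log": np.log(conc),
    "square root": np.sqrt(conc),
}

# Box-Cox with the maximum-likelihood lambda chosen by scipy.
bc, lmbda = stats.boxcox(conc)
candidates[f"Box-Cox (lambda = {lmbda:.2f})"] = bc

# Compare normality of each transformation with the Shapiro-Wilk test
# (a larger p-value means less evidence against normality).
for name, values in candidates.items():
    _, p = stats.shapiro(values)
    print(f"{name:>24}: Shapiro-Wilk p = {p:.3g}")
```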

  12. Distribution of Total Depressive Symptoms Scores and Each Depressive Symptom Item in a Sample of Japanese Employees.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Yamada, Hiroshi; Miyake, Hirotsugu; Furukawa, Toshiaki A; Furukaw, Toshiaki A

    2016-01-01

    In a previous study, we reported that the distribution of total depressive symptoms scores according to the Center for Epidemiologic Studies Depression Scale (CES-D) in a general population is stable throughout middle adulthood and follows an exponential pattern except for at the lowest end of the symptom score. Furthermore, the individual distributions of 16 negative symptom items of the CES-D exhibit a common mathematical pattern. To confirm the reproducibility of these findings, we investigated the distribution of total depressive symptoms scores and 16 negative symptom items in a sample of Japanese employees. We analyzed 7624 employees aged 20-59 years who had participated in the Northern Japan Occupational Health Promotion Centers Collaboration Study for Mental Health. Depressive symptoms were assessed using the CES-D. The CES-D contains 20 items, each of which is scored in four grades: "rarely," "some," "much," and "most of the time." The descriptive statistics and frequency curves of the distributions were then compared according to age group. The distribution of total depressive symptoms scores appeared to be stable from 30-59 years. The right tail of the distribution for ages 30-59 years exhibited a linear pattern with a log-normal scale. The distributions of the 16 individual negative symptom items of the CES-D exhibited a common mathematical pattern which displayed different distributions with a boundary at "some." The distributions of the 16 negative symptom items from "some" to "most" followed a linear pattern with a log-normal scale. The distributions of the total depressive symptoms scores and individual negative symptom items in a Japanese occupational setting show the same patterns as those observed in a general population. These results show that the specific mathematical patterns of the distributions of total depressive symptoms scores and individual negative symptom items can be reproduced in an occupational population.

  13. A study on raindrop size distribution variability in before and after landfall precipitations of tropical cyclones observed over southern India

    NASA Astrophysics Data System (ADS)

    Janapati, Jayalakshmi; seela, Balaji Kumar; Reddy M., Venkatrami; Reddy K., Krishna; Lin, Pay-Liam; Rao T., Narayana; Liu, Chian-Yi

    2017-06-01

    Raindrop size distribution (RSD) characteristics in before-landfall (BLF) and after-landfall (ALF) precipitations induced by three tropical cyclones (JAL, THANE, and NILAM) are investigated using a laser-based (PARticle SIze and VELocity - PARSIVEL) disdrometer at two different locations [Kadapa (14.47°N, 78.82°E) and Gadanki (13.5°N, 79.2°E)] in the semi-arid region of southern India. In both BLF and ALF precipitations of these three cyclones, convective precipitations have higher mass weighted mean diameter (Dm) and lower normalized intercept parameter (log10Nw) values than stratiform precipitations. The radar reflectivity (Z) and rain rate (R) relations (Z = A·R^b) showed distinct variations in the BLF and ALF precipitations of the three cyclones. BLF precipitation of the JAL cyclone has a higher Dm than ALF precipitation, whereas for the THANE and NILAM cyclones ALF precipitations have higher Dm than BLF. The Dm values of the three cyclones (both in BLF and ALF) are smaller than the Dm values of other (Atlantic and Pacific) oceanic cyclones. Interaction of different regions (eyewall, inner rainbands, and outer rainbands) of the cyclones with the environment and underlying surface led to RSD variations between BLF and ALF precipitations through different microphysical (collision-coalescence, breakup, evaporation, and riming) processes. The immediate significance of the present work is that (i) it contributes to our understanding of cyclone RSD in BLF and ALF precipitations, and (ii) it provides useful information for quantitative estimation of rainfall from Doppler weather radar observations.

  14. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  15. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  16. Evaluation and validity of a LORETA normative EEG database.

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-04-01

    To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2 second intervals sampled 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform the mean and standard deviation of the *.lor files were computed for each of the 2394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated Gaussian distribution in the range of 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) Adequate approximation to a Gaussian distribution can be achieved using LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-Score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as an hypothesis test based on the surface EEG before computing LORETA.
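
    The core of such a Z-score normative database can be sketched as follows: log10-transform the normative power values, store the per-pixel, per-frequency mean and standard deviation, and score new subjects against them. Array shapes match the numbers quoted above, but the simulated values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Sketch of a Z-score normative database: log10-transform the normative power
# values, store the per-pixel, per-frequency mean and sd, then score a new
# subject.  Array shapes follow the numbers quoted above; values are synthetic.
n_subjects, n_pixels, n_freqs = 106, 2394, 30
norm_power = rng.lognormal(mean=0.0, sigma=0.5, size=(n_subjects, n_pixels, n_freqs))

log_power = np.log10(norm_power)
mu = log_power.mean(axis=0)              # normative mean, per pixel and frequency
sd = log_power.std(axis=0, ddof=1)       # normative sd, per pixel and frequency

new_subject = rng.lognormal(mean=0.0, sigma=0.5, size=(n_pixels, n_freqs))
z = (np.log10(new_subject) - mu) / sd

print(f"fraction of |Z| > 2: {np.mean(np.abs(z) > 2):.2%}")
print(f"fraction of |Z| > 3: {np.mean(np.abs(z) > 3):.2%}")
```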

  17. Management of Listeria monocytogenes in fermented sausages using the Food Safety Objective concept underpinned by stochastic modeling and meta-analysis.

    PubMed

    Mataragas, M; Alessandria, V; Rantsiou, K; Cocolin, L

    2015-08-01

    In the present work, a demonstration is made on how the risk from the presence of Listeria monocytogenes in fermented sausages can be managed using the concept of Food Safety Objective (FSO) aided by stochastic modeling (Bayesian analysis and Monte Carlo simulation) and meta-analysis. For this purpose, the ICMSF equation was used, which combines the initial level (H0) of the hazard and its subsequent reduction (ΣR) and/or increase (ΣI) along the production chain. Each element of the equation was described by a distribution to investigate the effect not only of the level of the hazard, but also the effect of the accompanying variability. The distribution of each element was determined by Bayesian modeling (H0) and meta-analysis (ΣR and ΣI). The output was a normal distribution N(-5.36, 2.56) (log cfu/g) from which the percentage of the non-conforming products, i.e. the fraction above the FSO of 2 log cfu/g, was estimated at 0.202%. Different control measures were examined such as lowering initial L. monocytogenes level and inclusion of an additional killing step along the process resulting in reduction of the non-conforming products from 0.195% to 0.003% based on the mean and/or square-root change of the normal distribution, and 0.001%, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
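
    Given the reported output distribution N(-5.36, 2.56) (log cfu/g) and the FSO of 2 log cfu/g, the non-conforming fraction can be reproduced directly, as in the sketch below (the component H0, ΣR, and ΣI distributions are not re-simulated here):

```python
import numpy as np
from scipy.stats import norm

# Output distribution of the final L. monocytogenes concentration (log cfu/g)
# and the Food Safety Objective, as reported above.
mu, sd = -5.36, 2.56
fso = 2.0

# Fraction of non-conforming product, analytically and by Monte Carlo.
p_exact = norm.sf(fso, loc=mu, scale=sd)
draws = np.random.default_rng(6).normal(mu, sd, size=1_000_000)
p_mc = np.mean(draws > fso)

print(f"analytical fraction above FSO : {p_exact:.3%}")   # ≈ 0.20 %
print(f"Monte Carlo fraction above FSO: {p_mc:.3%}")
```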

  18. Mesoscale properties of clay aggregates from potential of mean force representation of interactions between nanoplatelets

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Davoud; Whittle, Andrew J.; Pellenq, Roland J.-M.

    2014-04-01

    Face-to-face and edge-to-edge free energy interactions of Wyoming Na-montmorillonite platelets were studied by calculating potential of mean force along their center to center reaction coordinate using explicit solvent (i.e., water) molecular dynamics and free energy perturbation methods. Using a series of configurations, the Gay-Berne potential was parametrized and used to examine the meso-scale aggregation and properties of platelets that are initially random oriented under isothermal-isobaric conditions. Aggregates of clay were defined by geometrical analysis of face-to-face proximity of platelets with size distribution described by a log-normal function. The isotropy of the microstructure was assessed by computing a scalar order parameter. The number of platelets per aggregate and anisotropy of the microstructure both increases with platelet plan area. The system becomes more ordered and aggregate size increases with increasing pressure until maximum ordered state at confining pressure of 50 atm. Further increase of pressure slides platelets relative to each other leading to smaller aggregate size. The results show aggregate size of (3-8) platelets for sodium-smectite in agreement with experiments (3-10). The geometrical arrangement of aggregates affects mechanical properties of the system. The elastic properties of the meso-scale aggregate assembly are reported and compared with nanoindentation experiments. It is found that the elastic properties at this scale are close to the cubic systems. The elastic stiffness and anisotropy of the assembly increases with the size of the platelets and the level of external pressure.

  19. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
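
    For orientation, a minimal discrete multiplicative log-normal cascade can be simulated as below; the number of cascade steps and the intermittency parameter λ² are illustrative, and the sketch uses a naive variance check rather than the GMM estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discrete multiplicative log-normal cascade: after k binary splitting steps,
# each of the 2**k cells carries the product of k independent log-normal
# weights.  lambda2, the variance of the log-weights, plays the role of the
# intermittency parameter estimated by the paper's GMM procedure.
k, lambda2 = 12, 0.05
n_cells = 2 ** k

log_measure = np.zeros(n_cells)
for level in range(k):
    n_blocks = 2 ** (level + 1)
    block_len = n_cells // n_blocks
    log_w = rng.normal(loc=-lambda2 / 2, scale=np.sqrt(lambda2), size=n_blocks)
    log_measure += np.repeat(log_w, block_len)

series = np.exp(log_measure)     # intermittent, bursty synthetic "measure"

# Naive moment check: the variance of log(series) should be roughly k * lambda2.
print(f"k * lambda2 = {k * lambda2:.3f}, "
      f"sample variance of log(series) = {np.log(series).var():.3f}")
```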

  20. Probabilistic measures of persistence and extinction in measles (meta)populations.

    PubMed

    Gunning, Christian E; Wearing, Helen J

    2013-08-01

    Persistence and extinction are fundamental processes in ecological systems that are difficult to accurately measure due to stochasticity and incomplete observation. Moreover, these processes operate on multiple scales, from individual populations to metapopulations. Here, we examine an extensive new data set of measles case reports and associated demographics in pre-vaccine era US cities, alongside a classic England & Wales data set. We first infer the per-population quasi-continuous distribution of log incidence. We then use stochastic, spatially implicit metapopulation models to explore the frequency of rescue events and apparent extinctions. We show that, unlike critical community size, the inferred distributions account for observational processes, allowing direct comparisons between metapopulations. The inferred distributions scale with population size. We use these scalings to estimate extinction boundary probabilities. We compare these predictions with measurements in individual populations and random aggregates of populations, highlighting the importance of medium-sized populations in metapopulation persistence. © 2013 John Wiley & Sons Ltd/CNRS.

  1. Comparative analysis of background EEG activity in childhood absence epilepsy during valproate treatment: a standardized, low-resolution, brain electromagnetic tomography (sLORETA) study.

    PubMed

    Shin, Jung-Hyun; Eom, Tae-Hoon; Kim, Young-Hoon; Chung, Seung-Yun; Lee, In-Goo; Kim, Jung-Min

    2017-07-01

    Valproate (VPA) is an antiepileptic drug (AED) used for initial monotherapy in treating childhood absence epilepsy (CAE). EEG might be an alternative approach to explore the effects of AEDs on the central nervous system. We performed a comparative analysis of background EEG activity during VPA treatment by using standardized, low-resolution, brain electromagnetic tomography (sLORETA) to explore the effect of VPA in patients with CAE. In 17 children with CAE, non-parametric statistical analyses using sLORETA were performed to compare the current density distribution of four frequency bands (delta, theta, alpha, and beta) between the untreated and treated condition. Maximum differences in current density were found in the left inferior frontal gyrus for the delta frequency band (log-F-ratio = -1.390, P > 0.05), the left medial frontal gyrus for the theta frequency band (log-F-ratio = -0.940, P > 0.05), the left inferior frontal gyrus for the alpha frequency band (log-F-ratio = -0.590, P > 0.05), and the left anterior cingulate for the beta frequency band (log-F-ratio = -1.318, P > 0.05). However, none of these differences were significant (threshold log-F-ratio = ±1.888, P < 0.01; threshold log-F-ratio = ±1.722, P < 0.05). Because EEG background is accepted as normal in CAE, VPA would not be expected to significantly change abnormal thalamocortical oscillations on a normal EEG background. Therefore, our results agree with currently accepted concepts but are not consistent with findings in some previous studies.

  2. Bayesian methods for uncertainty factor application for derivation of reference values.

    PubMed

    Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B

    2016-10-01

    In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process that considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw Bayesian methods would be potentially useful for combining toxicity data from disparate sources-high throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that considers each individual UF as a distribution. From an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
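
    When the POD uncertainty and each individual UF are represented as log-normal distributions, dividing the POD by the UFs simply adds normal variates in log space, and a lower percentile of the resulting log-normal can be read off as a candidate reference value. The sketch below uses purely illustrative geometric means and geometric standard deviations, not values from any IRIS assessment.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical point of departure (mg/kg-day) with log-normal uncertainty, and
# three individual UFs, each a log-normal with the stated geometric mean (GM)
# and geometric standard deviation (GSD).  All numbers are illustrative only.
pod_gm, pod_gsd = 10.0, 1.5
uf_gm_gsd = {
    "interspecies": (3.0, 1.6),
    "intraspecies": (10.0, 1.8),
    "database": (3.0, 1.4),
}

n = 200_000
log_rfd = rng.normal(np.log(pod_gm), np.log(pod_gsd), n)
for gm, gsd in uf_gm_gsd.values():
    # Dividing the POD by a log-normal UF subtracts a normal variate in log space.
    log_rfd -= rng.normal(np.log(gm), np.log(gsd), n)

rfd_draws = np.exp(log_rfd)
print(f"median candidate RfD         : {np.median(rfd_draws):.4f} mg/kg-day")
print(f"5th percentile (conservative): {np.percentile(rfd_draws, 5):.4f} mg/kg-day")
```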

  3. Role of noise and agents’ convictions on opinion spreading in a three-state voter-like model

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno

    2013-07-01

    In this work we study opinion formation in a voter-like model defined on a square lattice of linear size L. The agents may be in three different states, representing any public debate with three choices (yes, no, undecided). We consider heterogeneous agents that have different convictions about their opinions. These convictions limit the capacity of persuasion of the individuals during the interactions. Moreover, there is a noise p that represents the probability of an individual spontaneously changing his opinion to the undecided state. Our simulations suggest that the system reaches stationary states for all values of p, with consensus states occurring only for the noiseless case p = 0. In this case, the relaxation times are distributed according to a log-normal function, with the average value τ growing with the lattice size as τ ∼ L^α, where α ≈ 0.9. We found a threshold value p* ≈ 0.9 above which the stationary fraction of undecided agents is greater than the fraction of decided ones. We also study the consequences of the presence of external effects in the system, which models the influence of mass media on opinion formation.

  4. Plasmonic behaviour of sputtered Au nanoisland arrays

    NASA Astrophysics Data System (ADS)

    Tvarožek, V.; Szabó, O.; Novotný, I.; Kováčová, S.; Škriniarová, J.; Šutta, P.

    2017-02-01

    The specificity of the formation of Au sputtered nanoisland arrays (NIA) on a glass substrate or on a ZnO thin film doped by Ga is demonstrated. Statistical analysis of morphology images (SEM, AFM) exhibited a log-normal distribution of the size (area) of nanoislands; their mode AM varied from 8 to 328 nm² depending on the sputtering power density, which determined the nominal thicknesses in the range of 2-8 nm. Preferential polycrystalline texture (111) of Au NIA increased with the power density and after annealing. Transverse localised surface plasmonic resonance (LSPR; evaluated by transmission UV-vis spectroscopy) showed a red shift of the extinction peaks (Δλ ≤ 100 nm) with an increase of the nominal thickness, and a blue shift (Δλ ≤ -65 nm) after annealing of Au NIA. The plasmonic behaviour of Au NIA was described by modification of a size-scaling universal model using the nominal thin film thickness as a technological scaling parameter. Sputtering of a Ti intermediate adhesive ultrathin film between the glass substrate and gold improves the adhesion of Au nanoislands as well as supporting the formation of more defined Au NIA structures of smaller dimensions.

  5. Both size-frequency distribution and sub-populations of the main-belt asteroid population are consistent with YORP-induced rotational fission

    NASA Astrophysics Data System (ADS)

    Jacobson, S.; Scheeres, D.; Rossi, A.; Marzari, F.; Davis, D.

    2014-07-01

    From the results of a comprehensive asteroid-population-evolution model, we conclude that the YORP-induced rotational-fission hypothesis has strong repercussions for the small size end of the main-belt asteroid size-frequency distribution and is consistent with observed asteroid-population statistics and with the observed sub-populations of binary asteroids, asteroid pairs and contact binaries. The foundation of this model is the asteroid-rotation model of Marzari et al. (2011) and Rossi et al. (2009), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis (i.e. when the rotation rate exceeds a critical value, erosion and binary formation occur; Scheeres 2007) and binary-asteroid evolution (Jacobson & Scheeres, 2011). The YORP-effect timescale for large asteroids with diameters D ≳ 6 km is longer than the collision timescale in the main belt, thus the frequency of large asteroids is determined by a collisional equilibrium (e.g. Bottke 2005), but for small asteroids with diameters D ≲ 6 km, the asteroid-population evolution model confirms that YORP-induced rotational fission destroys small asteroids more frequently than collisions. Therefore, the frequency of these small asteroids is determined by an equilibrium between the creation of new asteroids out of the impact debris of larger asteroids and the destruction of these asteroids by YORP-induced rotational fission. By introducing a new source of destruction that varies strongly with size, YORP-induced rotational fission alters the slope of the size-frequency distribution. Using the outputs of the asteroid-population evolution model and a 1-D collision evolution model, we can generate this new size-frequency distribution and it matches the change in slope observed by the SKADS survey (Gladman 2009). This agreement is achieved with either an accretional power-law or a truncated "Asteroids were Born Big" size-frequency distribution (Weidenschilling 2010, Morbidelli 2009). The binary-asteroid evolution model is highly constrained by the modeling done in Jacobson & Scheeres, and therefore the asteroid-population evolution model has only two significant free parameters: the ratio of low-to-high-mass-ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. Using this model, we successfully reproduce the observed small-asteroid sub-populations, which orthogonally constrain the two free parameters. We find the outcome of rotational fission most likely produces an initial mass-ratio fraction that is four to eight times as likely to produce high-mass-ratio systems as low-mass-ratio systems, which is consistent with rotational fission creating binary systems in a flat distribution with respect to mass ratio. We also find that the mean of the log-normal BYORP coefficient distribution B ≈ 10^{-2}.

  6. Distribution of normal superficial ocular vessels in digital images.

    PubMed

    Banaee, Touka; Ehsaei, Asieh; Pourreza, Hamidreza; Khajedaluee, Mohammad; Abrishami, Mojtaba; Basiri, Mohsen; Daneshvar Kakhki, Ramin; Pourreza, Reza

    2014-02-01

    To investigate the distribution of different-sized vessels in the digital images of the ocular surface, an endeavor which may provide useful information for future studies. This study included 295 healthy individuals. From each participant, four digital photographs of the superior and inferior conjunctivae of both eyes, with a fixed succession of photography (right upper, right lower, left upper, left lower), were taken with a slit lamp mounted camera. Photographs were then analyzed by a previously described algorithm for vessel detection in the digital images. The area (of the image) occupied by vessels (AOV) of different sizes was measured. Height, weight, fasting blood sugar (FBS) and hemoglobin levels were also measured and the relationship between these parameters and the AOV was investigated. These findings indicated a statistically significant difference in the distribution of the AOV among the four conjunctival areas. No significant correlations were noted between the AOV of each conjunctival area and the different demographic and biometric factors. Medium-sized vessels were the most abundant vessels in the photographs of the four investigated conjunctival areas. The AOV of the different sizes of vessels follows a normal distribution curve in the four areas of the conjunctiva. The distribution of the vessels in successive photographs changes in a specific manner, with the mean AOV becoming larger as the photos were taken from the right upper to the left lower area. The AOV of vessel sizes has a normal distribution curve and medium-sized vessels occupy the largest area of the photograph. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  7. Bayesian methods to determine performance differences and to quantify variability among centers in multi-center trials: the IHAST trial.

    PubMed

    Bayman, Emine O; Chaloner, Kathryn M; Hindman, Bradley J; Todd, Michael M

    2013-01-16

    To quantify the variability among centers, to identify centers whose performance is potentially outside of normal variability in the primary outcome, and to propose a guideline for declaring them outliers. Novel statistical methodology using a Bayesian hierarchical model is used. Bayesian methods for estimation and outlier detection are applied assuming an additive random center effect on the log odds of response: centers are similar but different (exchangeable). The Intraoperative Hypothermia for Aneurysm Surgery Trial (IHAST) is used as an example. Analyses were adjusted for treatment, age, gender, aneurysm location, World Federation of Neurological Surgeons scale, Fisher score and baseline NIH stroke scale scores. Adjustments for differences in center characteristics were also examined. Graphical and numerical summaries of the between-center standard deviation (sd) and variability, as well as the identification of potential outliers are implemented. In the IHAST, the center-to-center variation in the log odds of favorable outcome at each center is consistent with a normal distribution with posterior sd of 0.538 (95% credible interval: 0.397 to 0.726) after adjusting for the effects of important covariates. Outcome differences among centers show no outlying centers. Four potential outlying centers were identified but did not meet the proposed guideline for declaring them as outlying. Center characteristics (number of subjects enrolled from the center, geographical location, learning over time, nitrous oxide, and temporary clipping use) did not predict outcome, but subject and disease characteristics did. Bayesian hierarchical methods allow for determination of whether outcomes from a specific center differ from others and whether specific clinical practices predict outcome, even when some centers/subgroups have relatively small sample sizes. In the IHAST no outlying centers were found. The estimated variability between centers was moderately large.

  8. Thorium normalization as a hydrocarbon accumulation indicator for Lower Miocene rocks in Ras Ghara area, Gulf of Suez, Egypt

    NASA Astrophysics Data System (ADS)

    El-Khadragy, A. A.; Shazly, T. F.; AlAlfy, I. M.; Ramadan, M.; El-Sawy, M. Z.

    2018-06-01

    An exploration method has been developed using surface and aerial gamma-ray spectral measurements for prospecting for petroleum in stratigraphic and structural traps. The Gulf of Suez is an important region for studying hydrocarbon potentiality in Egypt. The thorium normalization technique was applied to the sandstone reservoirs in the region to determine hydrocarbon potentiality zones using the three spectrometric gamma-ray logs (eU, eTh and K% logs). This method was applied to the recorded gamma-ray spectrometric logs for the Rudeis and Kareem Formations in the Ras Ghara oil field, Gulf of Suez, Egypt. The conventional well logs (gamma-ray, resistivity, neutron, density and sonic logs) were analyzed to determine the net pay zones in the study area. The agreement ratios between the thorium normalization technique and the results of the well log analyses are high, so the thorium normalization technique can be used as a guide for hydrocarbon accumulation in the studied reservoir rocks.

  9. X-ray microanalysis of porous materials using Monte Carlo simulations.

    PubMed

    Poirier, Dominique; Gauvin, Raynald

    2011-01-01

    Quantitative X-ray microanalysis models, such as ZAF or φ(ρz) methods, are normally based on solid, flat-polished specimens. This limits their use in various domains where porous materials are studied, such as powder metallurgy, catalysts, foams, etc. Previous experimental studies have shown that an increase in porosity leads to a deficit in X-ray emission for various materials, such as graphite, Cr₂O₃, CuO, ZnS (Ichinokawa et al., '69), Al₂O₃, and Ag (Lakis et al., '92). However, the mechanisms responsible for this decrease are unclear. The porosity by itself does not explain the loss in intensity; other mechanisms have therefore been proposed, such as extra energy loss by the diffusion of electrons by surface plasmons generated at the pore-solid interfaces, surface roughness, extra charging at the pore-solid interface, or carbon diffusion in the pores. However, the exact mechanism is still unclear. In order to better understand the effects of porosity on quantitative microanalysis, a new approach using Monte Carlo simulations was developed by Gauvin (2005) using a constant pore size. In this new study, the X-ray emission model was modified to include a random log-normal distribution of pore sizes in the simulated materials. This article presents, after a literature review of previous work on X-ray microanalysis of porous materials, some of the results obtained with Gauvin's modified model. They are then compared with experimental results. Copyright © 2011 Wiley Periodicals, Inc.
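
    A random log-normal distribution of pore sizes of the kind added to the X-ray emission model can be sketched as repeated sampling of spherical pore radii until a target porosity is reached; the cell size, median radius, GSD, and porosity below are illustrative assumptions, and pore overlaps are ignored.

```python
import numpy as np

rng = np.random.default_rng(9)

# Draw spherical pore radii from a log-normal distribution until a target
# volume porosity is reached in a cubic simulation cell.  The cell size,
# median radius, GSD, and porosity are illustrative, not values from the paper.
cell_edge = 10.0              # micrometres
target_porosity = 0.30
median_radius, gsd = 0.25, 1.6

cell_volume = cell_edge ** 3
pore_radii, pore_volume = [], 0.0
while pore_volume / cell_volume < target_porosity:
    r = rng.lognormal(mean=np.log(median_radius), sigma=np.log(gsd))
    pore_radii.append(r)
    pore_volume += 4.0 / 3.0 * np.pi * r ** 3   # overlaps ignored in this sketch

print(f"{len(pore_radii)} pores drawn, achieved porosity "
      f"{pore_volume / cell_volume:.2f}")
```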

  10. Performance Evaluation and Online Realization of Data-driven Normalization Methods Used in LC/MS based Untargeted Metabolomics Analysis.

    PubMed

    Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng

    2016-12-13

    In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data.
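
    As an illustration of one of the better-performing methods named above, the sketch below applies probabilistic quotient normalization (PQN) followed by a log transformation to a hypothetical intensity matrix; the dilution-factor setup is an assumption for demonstration, not part of the benchmark data.

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical LC/MS intensity matrix: rows = samples, columns = features,
# with sample-specific dilution factors that normalization should remove.
true_signal = rng.lognormal(mean=6.0, sigma=1.0, size=(20, 500))
dilution = rng.uniform(0.5, 2.0, size=(20, 1))
X = true_signal * dilution

# Probabilistic quotient normalization (PQN): divide each sample by the median
# of its feature-wise quotients against a reference (median) spectrum.
reference = np.median(X, axis=0)
quotients = X / reference
pqn_factors = np.median(quotients, axis=1, keepdims=True)
X_pqn = X / pqn_factors

# A log transformation is commonly applied afterwards to stabilize variance.
X_log = np.log2(X_pqn + 1.0)

# The PQN factors should track the true dilution factors up to a common scale.
print("PQN factor / dilution factor (first 5 samples):",
      np.round((pqn_factors / dilution)[:5, 0], 3))
```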

  11. Performance Evaluation and Online Realization of Data-driven Normalization Methods Used in LC/MS based Untargeted Metabolomics Analysis

    PubMed Central

    Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng

    2016-01-01

    In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data. PMID:27958387

  12. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
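
    The core idea can be sketched as a small linear system: on a finite interval, choose polynomial coefficients so that the first N+1 raw moments of the polynomial density match the target moments. The example below targets a Beta(2, 5) density, whose PDF is itself a degree-5 polynomial, so the degree-5 approximation should recover it exactly; this is a check of the mechanics, not the paper's procedure.

```python
import numpy as np
from scipy.stats import beta

# Method-of-moments polynomial PDF approximation on [a, b]: choose
# p(x) = sum_k c_k * x**k so that its first N+1 raw moments match the target's.
# Target: Beta(2, 5), whose PDF 30*x*(1-x)**4 is itself a degree-5 polynomial,
# so the degree-5 approximation should recover it (a check of the mechanics).
a, b, N = 0.0, 1.0, 5
target = beta(2, 5)
target_moments = [1.0] + [target.moment(m) for m in range(1, N + 1)]

# Integral of x**(m+k) over [a, b] is (b**(m+k+1) - a**(m+k+1)) / (m+k+1),
# so matching moments reduces to a small linear system for the coefficients.
A = np.array([[(b ** (m + k + 1) - a ** (m + k + 1)) / (m + k + 1)
               for k in range(N + 1)] for m in range(N + 1)])
coeffs = np.linalg.solve(A, target_moments)

x = np.linspace(a, b, 5)
approx_pdf = sum(c * x ** k for k, c in enumerate(coeffs))
print("coefficients:", np.round(coeffs, 2))        # ≈ [0, 30, -120, 180, -120, 30]
print("approx pdf  :", np.round(approx_pdf, 3))
print("exact pdf   :", np.round(target.pdf(x), 3))
```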

  13. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  14. Analytical Model for Mars Crater-Size Frequency Distribution

    NASA Astrophysics Data System (ADS)

    Bruckman, W.; Ruiz, A.; Ramos, E.

    2009-05-01

    We present a theoretical and analytical curve that reproduces essential features of the frequency distribution vs. diameter of the 42,000 impact craters contained in Barlow's Mars Catalog. The model is derived using reasonably simple assumptions that allow us to relate the present crater population to the crater population at each particular epoch. The model takes into consideration the reduction of the number of craters as a function of time caused by their erosion and obliteration, and this provides a simple and natural explanation for the presence of different slopes in the empirical log-log plot of number of craters (N) vs. diameter (D). A mean life for Martian craters as a function of diameter is deduced, and it is shown that this result is consistent with the corresponding determination of crater mean life based on Earth data. Arguments are given to suggest that this consistency follows from the fact that a crater's mean life is proportional to its volume. It also follows that in the absence of erosion and obliteration, when craters are preserved, we would have N ∝ 1/D^{4.3}, which is a striking conclusion, since the exponent 4.3 is larger than previously thought. Such an exponent implies a similar slope in the extrapolated impactor size-frequency distribution.

  15. Interpreting Gas Production Decline Curves By Combining Geometry and Topology

    NASA Astrophysics Data System (ADS)

    Ewing, R. P.; Hu, Q.

    2014-12-01

    Shale gas production forms an increasing fraction of domestic US energy supplies, but individual gas production wells show steep production declines. Better understanding of this production decline would allow better economic forecasting; better understanding of the reasons behind the decline would allow better production management. Yet despite these incentives, production decline curves remain poorly understood, and current analyses range from Arps' purely empirical equation to new sophisticated approaches requiring multiple unavailable parameters. Models often fail to capture salient features: for example, in log-log space many wells decline with an exponent markedly different from the -0.5 expected from diffusion, and often show a transition from one decline mode to another. We propose a new approach based on the assumption that the rate-limiting step is gas movement from the matrix to the induced fracture network. The matrix is represented as an assemblage of equivalent spheres (geometry), with low matrix pore connectivity (topology) that results in a distance-dependent accessible porosity profile given by percolation theory. The basic theory has just 2 parameters: the sphere size distribution (geometry), and the crossover distance (topology) that characterizes the porosity distribution. The theory is readily extended to include e.g. alternative geometries and bi-modal size distributions. Comparisons with historical data are promising.

  16. Wavefront-Guided Scleral Lens Correction in Keratoconus

    PubMed Central

    Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.

    2014-01-01

    Purpose To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371

  17. THE STELLAR MASS FUNDAMENTAL PLANE AND COMPACT QUIESCENT GALAXIES AT z < 0.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahid, H. Jabran; Damjanov, Ivana; Geller, Margaret J.

    2016-04-20

    We examine the evolution of the relation between stellar mass surface density, velocity dispersion, and half-light radius—the stellar mass fundamental plane (MFP)—for quiescent galaxies at z < 0.6. We measure the local relation from galaxies in the Sloan Digital Sky Survey and the intermediate redshift relation from ∼500 quiescent galaxies with stellar masses 10 ≲ log(M*/M⊙) ≲ 11.5. Nearly half of the quiescent galaxies in our intermediate redshift sample are compact. After accounting for important selection and systematic effects, the velocity dispersion distribution of galaxies at intermediate redshifts is similar to that of galaxies in the local universe. Galaxies at z < 0.6 appear to be smaller (≲0.1 dex) than galaxies in the local sample. The orientation of the stellar MFP is independent of redshift for massive quiescent galaxies at z < 0.6 and the zero-point evolves by ∼0.04 dex. Compact quiescent galaxies fall on the same relation as the extended objects. We confirm that compact quiescent galaxies are the tail of the size and mass distribution of the normal quiescent galaxy population.

  18. Spatial and size distributions of garnets grown in a pseudotachylyte generated during a lower crust earthquake

    NASA Astrophysics Data System (ADS)

    Clerc, Adriane; Renard, François; Austrheim, Håkon; Jamtveit, Bjørn

    2018-05-01

    In the Bergen Arc, western Norway, rocks exhumed from the lower crust record earthquakes that formed during the Caledonian collision. These earthquakes occurred at about 30-50 km depth under granulite or amphibolite facies metamorphic conditions. Coseismic frictional heating produced pseudotachylytes in this area. We describe pseudotachylytes using field data to infer earthquake magnitude (M ≥ 6.6), low dynamic friction during rupture propagation (μd < 0.1) and laboratory analyses to infer fast crystallization of microlites in the pseudotachylyte, within seconds of the earthquake arrest. High resolution 3D X-ray microtomography imaging reveals the microstructure of a pseudotachylyte sample, including numerous garnets and their corona of plagioclase that we infer have crystallized in the pseudotachylyte. These garnets 1) have dendritic shapes and are surrounded by plagioclase coronae almost fully depleted in iron, 2) have a log-normal volume distribution, 3) increase in volume with increasing distance away from the pseudotachylyte-host rock boundary, and 4) decrease in number with increasing distance away from the pseudotachylyte-host rock boundary. These characteristics indicate fast mineral growth, likely within seconds. We propose that these new quantitative criteria may assist in the unambiguous identification of pseudotachylytes in the field.

  19. Statistical analyses support power law distributions found in neuronal avalanches.

    PubMed

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
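
    A minimal sketch of the exponent-estimation and goodness-of-fit step, assuming a continuous power law above a known xmin; the maximum-likelihood estimator and Kolmogorov-Smirnov distance are the standard ones, and the synthetic data are hypothetical (empirical avalanche sizes are discrete, and the full procedure also compares alternative models).

```python
import numpy as np

def powerlaw_mle(sizes, xmin):
    """Continuous power-law exponent by maximum likelihood and the
    Kolmogorov-Smirnov distance of the fit, for data above xmin."""
    x = np.sort(np.asarray(sizes, dtype=float))
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / xmin))          # MLE for p(x) ~ x**-alpha
    cdf_emp = np.arange(1, n + 1) / n
    cdf_fit = 1.0 - (x / xmin) ** (1.0 - alpha)
    return alpha, np.max(np.abs(cdf_emp - cdf_fit))

# Hypothetical sample from a pure power law with exponent -1.5 (xmin = 1)
rng = np.random.default_rng(1)
samples = (1.0 - rng.uniform(size=20000)) ** (-1.0 / 0.5)   # inverse-CDF sampling
print(powerlaw_mle(samples, xmin=1.0))                      # alpha ~ 1.5, small KS
```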

  20. Log Distribution, Persistence, and Geomorphic Function in Streams and Rivers, in the Northeastern U.S.

    NASA Astrophysics Data System (ADS)

    St Pierre, L.; Burchsted, D.; Warren, D.

    2015-12-01

    Large wood provides critical ecosystem services such as fish habitat, temperature regulation and bank stabilization. In the northeastern U.S., the distribution of large wood is documented; however, there is little understanding of the movement, longevity and geomorphic function. This research examines the hypothesis that tree species control the persistence and geomorphic function of instream wood in the Appalachian region of the northeastern U.S. To do this, we assessed size, location, and species of logs in New Hampshire rivers, including locations in the White Mountain National Forest (WMNF) where these data were collected ten years ago. We expanded the previous dataset to include assessment of geomorphic function, including creation of diversion channels, pool formation, and sediment storage, among others. We also added new sites in the WMNF and sites on a large rural river in southwestern NH to increase the range of geomorphic variables to now include: confined and unconfined channels; 1st to 4th order streams; low to high gradient; meandering, multithreaded, and straight channels; and land use such as historic logging, modern agriculture, and post-agricultural abandonment. At each study site, we located all large logs (>10cm diameter, > 1m length) and log jams (>3 accumulated logs that provide a geomorphic function) along 100m-700m reaches. We marked each identified log with a numbered tag and recorded species, diameter, length, orientation, GPS location, tag number, and photographs. We assessed function and accumulation, decay, stability, and source classes for each log. Along each reach we measured riparian forest composition and structure and channel width. Preliminary analysis suggests that tree species significantly affects the function of logs: yellow birch and American sycamore are highly represented. Additionally, geomorphic setting also plays a primary role, where unconfined reaches have large logs that provide important functions; those functions are rarely contributed by logs in confined channels. Land use limit the ability of logs to provide habitat for vegetation recruitment, notable in rivers adjacent to agricultural areas that maintain a straight channel; invasive vegetation dominate the banks and there is little to no recruitment of native vegetation.

  1. VALORATE: fast and accurate log-rank test in balanced and unbalanced comparisons of survival curves and cancer genomics.

    PubMed

    Treviño, Victor; Tamez-Pena, Jose

    2017-06-15

    The association of genomic alterations to outcomes in cancer is affected by a problem of unbalanced groups generated by the low frequency of alterations. For this, an R package (VALORATE) that estimates the null distribution and the P -value of the log-rank based on a recent reformulation is presented. For a given number of alterations that define the size of survival groups, the log-rank density is estimated by a weighted sum of conditional distributions depending on a co-occurrence term of mutations and events. The estimations are accurately accelerated by sampling across co-occurrences allowing the analysis of large genomic datasets in few minutes. In conclusion, the proposed VALORATE R package is a valuable tool for survival analysis. The R package is available in CRAN at https://cran.r-project.org and in http://bioinformatica.mty.itesm.mx/valorateR . vtrevino@itesm.mx. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  2. Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2015-12-01

    The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.

  3. Characteristics of large particles and their effects on the submarine light field

    NASA Astrophysics Data System (ADS)

    Hou, Weilin

    Large particles play important roles in the ocean by modifying the underwater light field and affecting material transfer. The particle size distribution of large particles has been measured in situ with multiple-camera video microscopy, and automated particle sizing and recognition software was developed. Results show that there are more large particles in coastal waters than previously thought, indicated by a hyperbolic size-distribution curve with a (log-log) slope parameter close to 3 instead of 4 for particles larger than 100 μm in diameter. Larger slopes are more typical for particles in the open ocean. This slope permits estimation of the distribution into the small-particle size range for use in correcting beam-attenuation measurements for near-forward scattering. The large-particle slope and c-meter were used to estimate the small-particle size distributions, which nearly matched those measured with a Coulter Counter (3.05%). There is also a fair correlation (r² = 0.729) between the slope of the distribution and its concentration parameters. Scattering by large particles is influenced not only by the concentrations of these particles, but also by the scattering phase functions. This first in-situ measurement of large-particle scattering at multiple angles reveals that they scatter more in the backward direction than was previously believed, and the enhanced backscattering can be explained in part by multiple scattering of aggregated particles. Proper identification of these large particles can be of great help in understanding the status of the ecosystem. By extracting particle features from high-resolution video images via moment-invariant functions and applying this information to lower-resolution images, we increase the effective sample volume without severely degrading classification efficiency. Traditional pattern recognition algorithms classified zooplankton with results within 24% of zooplankton collected using bottle samples. A faster particle recognition scheme using optical scattering is introduced and test results are satisfactory with an average error of 32%. This method shows promise provided that the signal-to-noise ratio of the observations can be improved.
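
    The log-log slope of a hyperbolic (Junge-type) size distribution can be estimated with a simple least-squares fit, as sketched below; the binned diameters and concentrations are hypothetical, and this is only a schematic version of the slope estimation described above.

```python
import numpy as np

def junge_slope(diameters, counts):
    """Estimate the (log-log) slope s of a hyperbolic size distribution,
    n(D) ~ D**(-s), by least squares on log-transformed bins. A sketch only."""
    log_d = np.log10(np.asarray(diameters, dtype=float))
    log_n = np.log10(np.asarray(counts, dtype=float))
    slope, intercept = np.polyfit(log_d, log_n, 1)
    return -slope, intercept

# Hypothetical binned concentrations following n(D) ~ D**-3 with noise
rng = np.random.default_rng(2)
D = np.logspace(2, 3.5, 12)                 # 100 um to ~3 mm
n = 1e9 * D ** -3.0 * rng.lognormal(0, 0.1, size=D.size)
print(junge_slope(D, n))                    # slope ~ 3
```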

  4. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student’s t-distribution*

    PubMed Central

    Leão, William L.; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210

  5. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution.

    PubMed

    Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.

  6. Novel fuelbed characteristics associated with mechanical mastication treatments in northern California and south-western Oregon, USA

    Treesearch

    Jeffrey M. Kane; J. Morgan Varner; Eric E. Knapp

    2009-01-01

    Mechanically masticated fuelbeds are distinct from natural or logging slash fuelbeds, with different particle size distributions, bulk density, and particle shapes, leading to challenges in predicting fire behavior and effects. Our study quantified some physical properties of fuel particles (e.g. squared quadratic mean diameter, proportion of non-cylindrical particles...

  7. Inactivation of Alicyclobacillus acidoterrestris ATCC 49025 spores in apple juice by pulsed light. Influence of initial contamination and required reduction levels.

    PubMed

    Ferrario, Mariana I; Guerrero, Sandra N

    The purpose of this study was to analyze the response of different initial contamination levels of Alicyclobacillus acidoterrestris ATCC 49025 spores in apple juice as affected by pulsed light treatment (PL, batch mode, xenon lamp, 3 pulses/s, 0-71.6 J/cm²). Biphasic and Weibull frequency distribution models were used to characterize the relationship between inoculum size and treatment time with the reductions achieved after PL exposure. Additionally, a second order polynomial model was computed to relate required PL processing time to inoculum size and requested log reductions. PL treatment caused up to 3.0-3.5 log reductions, depending on the initial inoculum size. Inactivation curves corresponding to PL-treated samples were adequately characterized by both Weibull and biphasic models (adjusted R² of 94-96%), and revealed that lower initial inoculum sizes were associated with higher inactivation rates. According to the polynomial model, the predicted time for PL treatment increased exponentially with inoculum size. Copyright © 2017 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.
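
    A hedged sketch of fitting a Weibull inactivation model, log10(N/N0) = -(t/δ)^p, to dose-response data with scipy; the treatment times and log-reduction values are invented for illustration and do not reproduce the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, delta, p):
    """Weibull inactivation model: log10(N/N0) = -(t/delta)**p."""
    return -(t / delta) ** p

# Hypothetical PL dose-response data: treatment time (s) and log10 survival ratio
t = np.array([10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
log_survival = np.array([-0.4, -0.9, -1.6, -2.2, -2.8, -3.2])

(delta, p), _ = curve_fit(weibull_log_survival, t, log_survival,
                          p0=(30.0, 1.0), bounds=(1e-6, np.inf))
print(f"delta = {delta:.1f} s, shape p = {p:.2f}")
```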

  8. Evaluation of bacterial run and tumble motility parameters through trajectory analysis

    NASA Astrophysics Data System (ADS)

    Liang, Xiaomeng; Lu, Nanxi; Chang, Lin-Ching; Nguyen, Thanh H.; Massoudieh, Arash

    2018-04-01

    In this paper, a method for extraction of the behavior parameters of bacterial migration based on the run and tumble conceptual model is described. The methodology is applied to microscopic images representing the motile movement of flagellated Azotobacter vinelandii. The bacterial cells are considered to change direction during both runs and tumbles, as is evident from the movement trajectories. An unsupervised cluster analysis was performed to fractionate each bacterial trajectory into run and tumble segments, and then the distributions of parameters for each mode were extracted by fitting the mathematical distributions best representing the data. A Gaussian copula was used to model the autocorrelation in swimming velocity. For both run and tumble modes, the Gamma distribution was found to fit the marginal velocity best, and the Logistic distribution was found to represent the deviation angle better than the other distributions considered. For the transition rate distribution, the log-logistic and log-normal distributions, respectively, were found to perform better than the traditionally assumed exponential distribution. A model was then developed to mimic the motility behavior of bacteria in the presence of flow. The model was applied to evaluate its ability to describe observed patterns of bacterial deposition on surfaces in a micro-model experiment with an approach velocity of 200 μm/s. It was found that the model can qualitatively reproduce the attachment results of the micro-model setting.
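
    A minimal sketch of the distribution-fitting step, assuming run speeds and transition rates have already been extracted from segmented trajectories; the synthetic data and the fixed zero location parameters are assumptions, and the Kolmogorov-Smirnov statistic is used here only as one simple fit criterion.

```python
import numpy as np
from scipy import stats

# Hypothetical per-segment speeds (um/s) and transition rates (1/s); real values
# would come from the clustering step described in the abstract.
rng = np.random.default_rng(3)
run_speeds = rng.gamma(shape=4.0, scale=10.0, size=500)       # stand-in data
transition_rates = rng.lognormal(mean=0.0, sigma=0.6, size=500)

# Fit candidate marginal distributions (location fixed at zero)
gamma_params = stats.gamma.fit(run_speeds, floc=0)
lognorm_params = stats.lognorm.fit(transition_rates, floc=0)

# Compare fits with the Kolmogorov-Smirnov statistic
print(stats.kstest(run_speeds, 'gamma', args=gamma_params))
print(stats.kstest(transition_rates, 'lognorm', args=lognorm_params))
```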

  9. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.

  10. Predicting the extent of heterogeneity in meta-analysis, using empirical data from the Cochrane Database of Systematic Reviews

    PubMed Central

    Turner, Rebecca M; Davey, Jonathan; Clarke, Mike J; Thompson, Simon G; Higgins, Julian PT

    2012-01-01

    Background Many meta-analyses contain only a small number of studies, which makes it difficult to estimate the extent of between-study heterogeneity. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, and offers advantages over conventional random-effects meta-analysis. To assist in this, we provide empirical evidence on the likely extent of heterogeneity in particular areas of health care. Methods Our analyses included 14 886 meta-analyses from the Cochrane Database of Systematic Reviews. We classified each meta-analysis according to the type of outcome, type of intervention comparison and medical specialty. By modelling the study data from all meta-analyses simultaneously, using the log odds ratio scale, we investigated the impact of meta-analysis characteristics on the underlying between-study heterogeneity variance. Predictive distributions were obtained for the heterogeneity expected in future meta-analyses. Results Between-study heterogeneity variances for meta-analyses in which the outcome was all-cause mortality were found to be on average 17% (95% CI 10–26) of variances for other outcomes. In meta-analyses comparing two active pharmacological interventions, heterogeneity was on average 75% (95% CI 58–95) of variances for non-pharmacological interventions. Meta-analysis size was found to have only a small effect on heterogeneity. Predictive distributions are presented for nine different settings, defined by type of outcome and type of intervention comparison. For example, for a planned meta-analysis comparing a pharmacological intervention against placebo or control with a subjectively measured outcome, the predictive distribution for heterogeneity is a log-normal(−2.13, 1.58²) distribution, which has a median value of 0.12. In an example of meta-analysis of six studies, incorporating external evidence led to a smaller heterogeneity estimate and a narrower confidence interval for the combined intervention effect. Conclusions Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings. The informative priors provided will be very beneficial in future meta-analyses including few studies. PMID:22461129
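
    The quoted predictive distribution can be used directly as a prior; a small sketch follows, assuming the reported parameters log-normal(−2.13, 1.58²) refer to the between-study variance τ².

```python
import numpy as np

# Draw from the reported predictive distribution for tau^2:
# log(tau^2) ~ Normal(-2.13, 1.58^2), and check the stated median of ~0.12.
rng = np.random.default_rng(4)
tau_sq = rng.lognormal(mean=-2.13, sigma=1.58, size=100_000)
print(np.median(tau_sq), np.exp(-2.13))    # both ~0.119
print(np.percentile(tau_sq, [2.5, 97.5]))  # a wide predictive interval
```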

  11. The letter contrast sensitivity test: clinical evaluation of a new design.

    PubMed

    Haymes, Sharon A; Roberts, Kenneth F; Cruess, Alan F; Nicolela, Marcelo T; LeBlanc, Raymond P; Ramsey, Michael S; Chauhan, Balwantray C; Artes, Paul H

    2006-06-01

    To compare the reliability, validity, and responsiveness of the Mars Letter Contrast Sensitivity (CS) Test to the Pelli-Robson CS Chart. One eye of 47 normal control subjects, 27 patients with open-angle glaucoma, and 17 with age-related macular degeneration (AMD) was tested twice with the Mars test and twice with the Pelli-Robson test, in random order on separate days. In addition, 17 patients undergoing cataract surgery were tested, once before and once after surgery. The mean Mars CS was 1.62 log CS (0.06 SD) for normal subjects aged 22 to 77 years, with significantly lower values in patients with glaucoma or AMD (P<0.001). Mars test-retest 95% limits of agreement (LOA) were ±0.13, ±0.19, and ±0.24 log CS for normal, glaucoma, and AMD, respectively. In comparison, Pelli-Robson test-retest 95% LOA were ±0.18, ±0.19, and ±0.33 log CS. The Spearman correlation between the Mars and Pelli-Robson tests was 0.83 (P<0.001). However, systematic differences were observed, particularly at the upper-normal end of the range, where Mars CS was lower than Pelli-Robson CS. After cataract surgery, Mars and Pelli-Robson effect size statistics were 0.92 and 0.88, respectively. The results indicate the Mars test has test-retest reliability equal to or better than the Pelli-Robson test and comparable responsiveness. The strong correlation between the tests provides evidence the Mars test is valid. However, systematic differences indicate normative values are likely to be different for each test. The Mars Letter CS Test is a useful and practical alternative to the Pelli-Robson CS Chart.
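
    Test-retest 95% limits of agreement of the kind reported above are commonly computed Bland-Altman style; a sketch with hypothetical repeated log CS scores follows (the study's exact procedure may differ).

```python
import numpy as np

def limits_of_agreement(test, retest):
    """Bland-Altman style test-retest bias and 95% limits of agreement."""
    diff = np.asarray(retest, float) - np.asarray(test, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical repeated letter-chart scores for a few observers (log CS)
test   = np.array([1.62, 1.58, 1.66, 1.54, 1.60])
retest = np.array([1.64, 1.56, 1.62, 1.58, 1.60])
print(limits_of_agreement(test, retest))
```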

  12. A small-diameter NMR logging tool for groundwater investigations

    USGS Publications Warehouse

    Walsh, David; Turner, Peter; Grunewald, Elliot; Zhang, Hong; Butler, James J.; Reboulet, Ed; Knobbe, Steve; Christy, Tom; Lane, John W.; Johnson, Carole D.; Munday, Tim; Fitzpatrick, Andrew

    2013-01-01

    A small-diameter nuclear magnetic resonance (NMR) logging tool has been developed and field tested at various sites in the United States and Australia. A novel design approach has produced relatively inexpensive, small-diameter probes that can be run in open or PVC-cased boreholes as small as 2 inches in diameter. The complete system, including surface electronics and various downhole probes, has been successfully tested in small-diameter monitoring wells in a range of hydrogeological settings. A variant of the probe that can be deployed by a direct-push machine has also been developed and tested in the field. The new NMR logging tool provides reliable, direct, and high-resolution information that is of importance for groundwater studies. Specifically, the technology provides direct measurement of total water content (total porosity in the saturated zone or moisture content in the unsaturated zone), and estimates of relative pore-size distribution (bound vs. mobile water content) and hydraulic conductivity. The NMR measurements show good agreement with ancillary data from lithologic logs, geophysical logs, and hydrogeologic measurements, and provide valuable information for groundwater investigations.

  13. Element enrichment factor calculation using grain-size distribution and functional data regression.

    PubMed

    Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

    2015-01-01

    In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to establish equilibrium between reliability of the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Bubble-chip analysis of human origin distributions demonstrates on a genomic scale significant clustering into zones and significant association with transcription

    PubMed Central

    Mesner, Larry D.; Valsakumar, Veena; Karnani, Neerja; Dutta, Anindya; Hamlin, Joyce L.; Bekiranov, Stefan

    2011-01-01

    We have used a novel bubble-trapping procedure to construct nearly pure and comprehensive human origin libraries from early S- and log-phase HeLa cells, and from log-phase GM06990, a karyotypically normal lymphoblastoid cell line. When hybridized to ENCODE tiling arrays, these libraries illuminated 15.3%, 16.4%, and 21.8% of the genome in the ENCODE regions, respectively. Approximately half of the origin fragments cluster into zones, and their signals are generally higher than those of isolated fragments. Interestingly, initiation events are distributed about equally between genic and intergenic template sequences. While only 13.2% and 14.0% of genes within the ENCODE regions are actually transcribed in HeLa and GM06990 cells, 54.5% and 25.6% of zonal origin fragments overlap transcribed genes, most with activating chromatin marks in their promoters. Our data suggest that cell synchronization activates a significant number of inchoate origins. In addition, HeLa and GM06990 cells activate remarkably different origin populations. Finally, there is only moderate concordance between the log-phase HeLa bubble map and published maps of small nascent strands for this cell line. PMID:21173031

  15. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    PubMed Central

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  16. What made discy galaxies giant?

    NASA Astrophysics Data System (ADS)

    Saburova, A. S.

    2018-01-01

    I studied giant discy galaxies with optical radii greater than 30 kpc. The comparison of these systems with discy galaxies of moderate sizes revealed that they tend to have higher rotation velocities, B-band luminosities, H I masses and dark-to-luminous mass ratios. The giant discs follow the log(M_{H I})–R_{25} trend found for normal-sized galaxies. This indicates the absence of peculiarities in the evolution of star formation in these galaxies. The H I mass-to-luminosity ratio of giant galaxies appears not to differ from that of normal-sized galaxies, giving evidence in favour of similar star formation efficiency. I also found that bars and rings occur more frequently among giant discs. I performed mass modelling of the subsample of 18 giant galaxies with available rotation curves and surface photometry data and constructed χ² maps for the parameters of their dark matter haloes. These estimates indicate that giant discs tend to be formed in larger, more massive and more rarefied dark haloes in comparison to moderate-sized galaxies. However, giant galaxies do not deviate significantly from the relations between the optical sizes and dark halo parameters for moderate-sized galaxies. These findings can rule out the catastrophic scenario of the formation of at least most giant discs, since they follow the same relations as normal discy galaxies. The giant sizes of the discs can be due to the high radial scale of the dark matter haloes in which they were formed.

  17. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  18. The wavelength dependent model of extinction in fog and haze for free space optical communication.

    PubMed

    Grabner, Martin; Kvicera, Vaclav

    2011-02-14

    The wavelength dependence of the extinction coefficient in fog and haze is investigated using Mie single scattering theory. It is shown that the effective radius of drop size distribution determines the slope of the log-log dependence of the extinction on wavelengths in the interval between 0.2 and 2 microns. The relation between the atmospheric visibility and the effective radius is derived from the empirical relationship of liquid water content and extinction. Based on these results, the model of the relationship between visibility and the extinction coefficient with different effective radii for fog and for haze conditions is proposed.

  19. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  20. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
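
    A small sketch of the mimicry problem described above: bout durations drawn from a two-component exponential mixture are fitted with the "incorrect" continuous power-law model by maximum likelihood, and the Kolmogorov-Smirnov distance quantifies how acceptable the wrong model looks. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical wake-bout durations (minutes) from a two-component exponential
# mixture; such data can look roughly linear on a log-log survival plot.
bouts = np.concatenate([rng.exponential(0.5, 7000), rng.exponential(8.0, 3000)])

# Fit the "incorrect" power-law model above xmin by maximum likelihood
xmin = 0.5
x = np.sort(bouts[bouts >= xmin])
alpha = 1.0 + x.size / np.log(x / xmin).sum()

# Kolmogorov-Smirnov distance of the power-law fit to the mixture-generated data
cdf_emp = np.arange(1, x.size + 1) / x.size
cdf_pl = 1.0 - (x / xmin) ** (1.0 - alpha)
print(alpha, np.abs(cdf_emp - cdf_pl).max())
```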

  1. The influence of topology on hydraulic conductivity in a sand-and-gravel aquifer.

    PubMed

    Morin, Roger H; LeBlanc, Denis R; Troutman, Brent M

    2010-01-01

    A field experiment consisting of geophysical logging and tracer testing was conducted in a single well that penetrated a sand-and-gravel aquifer at the U.S. Geological Survey Toxic Substances Hydrology research site on Cape Cod, Massachusetts. Geophysical logs and flowmeter/pumping measurements were obtained to estimate vertical profiles of porosity φ, hydraulic conductivity K, temperature, and bulk electrical conductivity under background, freshwater conditions. Saline-tracer fluid was then injected into the well for 2 h and its radial migration into the surrounding deposits was monitored by recording an electromagnetic-induction log every 10 min. The field data are analyzed and interpreted primarily through the use of Archie's (1942) law to investigate the role of topological factors such as pore geometry and connectivity, and grain size and packing configuration in regulating fluid flow through these coarse-grained materials. The logs reveal no significant correlation between K and φ, and imply that groundwater models that link these two properties may not be useful at this site. Rather, it is the distribution and connectivity of the fluid phase as defined by formation factor F, cementation index m, and tortuosity α that primarily control the hydraulic conductivity. Results show that F correlates well with K, thereby indicating that induction logs provide qualitative information on the distribution of hydraulic conductivity. A comparison of α, which incorporates porosity data, with K produces only a slightly better correlation and further emphasizes the weak influence of the bulk value of φ on K.

  2. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites: Observed cloud variability at ARM sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-17

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the log-normal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
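
    A sketch of the scale-dependence analysis, assuming a 1-D series of LWP retrievals: moments are computed within non-overlapping windows of increasing length and then averaged across windows. The synthetic log-normal series and window lengths are placeholders, not ARM data, and gaps or clear-sky handling are ignored.

```python
import numpy as np
from scipy import stats

def scale_dependent_moments(series, window_sizes):
    """Mean within-window standard deviation, relative dispersion (std/mean),
    and skewness of a 1-D series, for several non-overlapping window lengths."""
    out = {}
    for w in window_sizes:
        n = (len(series) // w) * w
        blocks = series[:n].reshape(-1, w)
        out[w] = (blocks.std(axis=1).mean(),
                  (blocks.std(axis=1) / blocks.mean(axis=1)).mean(),
                  stats.skew(blocks, axis=1).mean())
    return out

# Hypothetical log-normally distributed LWP series (g/m^2) at 1-min resolution
rng = np.random.default_rng(6)
lwp = rng.lognormal(mean=4.5, sigma=0.8, size=20_000)
print(scale_dependent_moments(lwp, window_sizes=[10, 60, 360, 720]))
```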

  3. Parameter estimation and forecasting for multiplicative log-normal cascades

    NASA Astrophysics Data System (ADS)

    Leövey, Andrés E.; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.

  4. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  5. Characterisation of a garnet population from the Sikkim Himalaya: implications for the mechanisms and rates of porphyroblast crystallisation

    NASA Astrophysics Data System (ADS)

    George, Freya; Gaidies, Fred

    2016-04-01

    Analysis of porphyroblast distribution in metamorphic rocks yields insight into the processes controlling metamorphic reaction rates. By coupling this textural record with microprobe analysis and phase-equilibria and diffusion modelling, a detailed view of the nucleation and growth history of metamorphic minerals can be obtained. In this study, we comprehensively characterise the 3D distribution and compositional variation of a garnet population in a garnet-grade pelitic schist of the Lesser Himalayan Sequence (Sikkim), in order to investigate both the rates and kinetic controls of porphyroblastic crystallisation. Quantification of the size, shape and spatial distribution of garnet using high-resolution μ-computed X-ray tomography and statistical analysis reveals a log-normal crystal size distribution, systematic variation of aspect ratio with crystal size, and a significantly clustered garnet texture in the study sample. The latter is indicative of interface-controlled nucleation and growth, with nucleation sites controlled principally by a heterogeneous precursor assemblage. At length-scales less than 0.7 mm, there is evidence for adjacent grains that are on average smaller than the mean size of the population; this minor ordering is attributed to secondary redistribution of porphyroblast centers and reduction of crystal sizes due to syn-kinematic growth and resorption, respectively. Geochemical traverses through centrally sectioned garnet crystals of variable size highlight several features: (1) core compositions of even the smallest crystals preserve primary prograde growth zonation, with little evidence for diffusional modification in any crystal size; (2) rim compositions are within error between grains, suggestive of sample-scale equilibration of the growth medium at the time of cessation of crystallisation; (3) different grains of equal radii display equivalent compositional zoning; and (4) gradients of compositional profiles display a steepening trend in progressively smaller grain sizes, converse to anticipated trends based on classic kinetic crystallisation theory. The observed systematic behaviour is interpreted to reflect interface-controlled rates of crystallisation, with a decrease in the rate of crystal growth of newly nucleated grains as the crystallisation interval proceeds. Numerical simulations of garnet growth successfully reproduce observed core and rim compositions, and simulations of intracrystalline diffusion yield rapid heating/cooling rates along the P-T path, in excess of 100 °C/Ma. Radial garnet crystallisation is correspondingly rapid, with minimum growth rates of 1.5 mm/Ma in the smallest crystals. Simulations suggest progressive nucleation of new generations of garnet occurred with an exponentially decreasing frequency along the prograde path; however, measured gradients indicate that core compositions developed more slowly than predicted by the model, potentially resulting in a more evenly distributed pattern of nucleation.

  6. Theory of the intermediate stage of crystal growth with applications to insulin crystallization

    NASA Astrophysics Data System (ADS)

    Barlow, D. A.

    2017-07-01

    A theory for the intermediate stage of crystal growth, based on two defining equations, one for population continuity and another for mass balance, is used to study the kinetics of the supersaturation decay, the homogeneous nucleation rate, the linear growth rate and the final distribution of crystal sizes for the crystallization of bovine and porcine insulin from solution. The cited experimental reports suggest that the crystal linear growth rate is directly proportional to the square of the insulin concentration in solution for bovine insulin and to the cube of concentration for porcine. In a previous work, it was shown that the above-mentioned system could be solved for the case where the growth rate is directly proportional to the normalized supersaturation. Here a more general solution is presented, valid for cases where the growth rate is directly proportional to the normalized supersaturation raised to the power of any positive integer. The resulting expressions for the time-dependent normalized supersaturation and crystal size distribution are compared with experimental reports for insulin crystallization. An approximation for the maximum crystal size at the end of the intermediate stage is derived. The results suggest that the largest crystal size in the distribution at the end of the intermediate stage is maximized when nucleation is restricted to be only homogeneous. Further, the largest size in the final distribution depends only weakly upon the initial supersaturation.

  7. Mass and number size distributions of emitted particulates at five important operation units in a hazardous industrial waste incineration plant.

    PubMed

    Lin, Chi-Chi; Huang, Hsiao-Lin; Hsiao, Wen-Yuan

    2016-01-01

    Past studies indicated that particulates generated by waste incineration contain various hazardous compounds. The aerosol characteristics are very important for particulate hazard control and workers' protection. This study explores the detailed characteristics of particulates emitted from each important operation unit in a rotary kiln-based hazardous industrial waste incineration plant. A dust size analyzer (Grimm 1.109) and a scanning mobility particle sizer (SMPS) were used to measure the aerosol mass concentration, mass size distribution, and number size distribution at five operation units (S1-S5) during periods of normal operation, furnace shutdown, and annual maintenance. The highest measured PM10 concentration was found at the area of fly ash discharge from air pollution control equipment (S5) during normal operation. Fine particles (PM2.5) constituted the majority of the particles emitted from the incineration plant. The mass size distributions made it clear that the aerosols responsible for the increase in particulate mass resulting from work activities were mostly larger than 1.5 μm, whereas the number size distributions showed that the particulates responsible for the increase in number concentration were mostly in the submicrometer range. The process of discharging fly ash from air pollution control equipment can significantly increase the emission of nanoparticles. The mass concentrations and size distributions of emitted particulates were different at each operation unit. This information is valuable for managers seeking appropriate strategies to reduce particulate emission and associated worker exposure.

  8. Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.

    PubMed

    Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J

    2017-06-01

    Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violations of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is simulated from a Weibull or a similar distribution such as the Gamma or truncated Gaussian.
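
    As a rough illustration of fitting zero-heavy continuous data, the sketch below maximizes the likelihood of a simple two-part (zero-inflated) Weibull model; it is not the authors' marginalized mixture formulation, and the simulated data, zero proportion and starting values are assumptions.

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)

        # Simulated zero-heavy outcome: ~40% exact zeros, positive part ~ Weibull(shape 1.5, scale 2).
        n = 500
        zero = rng.random(n) < 0.4
        y = np.where(zero, 0.0, rng.weibull(1.5, size=n) * 2.0)

        def negloglik(theta):
            logit_pi, log_shape, log_scale = theta
            pi = 1.0 / (1.0 + np.exp(-logit_pi))          # P(Y = 0)
            shape, scale = np.exp(log_shape), np.exp(log_scale)
            pos = y > 0
            ll = (~pos).sum() * np.log(pi)                                       # zero part
            ll += pos.sum() * np.log1p(-pi)                                      # probability of being positive
            ll += stats.weibull_min.logpdf(y[pos], shape, scale=scale).sum()     # Weibull density of positives
            return -ll

        fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
        logit_pi, log_shape, log_scale = fit.x
        print("P(zero) =", round(1.0 / (1.0 + np.exp(-logit_pi)), 3),
              " Weibull shape =", round(np.exp(log_shape), 3),
              " scale =", round(np.exp(log_scale), 3))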

  9. Rescaled earthquake recurrence time statistics: application to microrepeaters

    NASA Astrophysics Data System (ADS)

    Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru

    2009-01-01

    Slip on major faults primarily occurs during 'characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the short sequences of characteristic earthquakes that are available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions; in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes that recur at the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA as well as NE Japan. We find that, once the respective sequence can be considered sufficiently stationary, the statistics can be well fitted by either a Weibull or a log-normal distribution. We clearly demonstrate this fact by our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
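
    The rescaling-and-superposition idea can be sketched as below, where each hypothetical recurrence-time sequence is normalized by its own mean before the combined sample is fitted with Weibull and log-normal models. The exact rescaling used in the paper (which also involves standard deviations) may differ, and the sequences here are synthetic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Hypothetical recurrence-time sequences (days) from three different repeating sources;
        # real input would be the observed inter-event times of each microrepeater sequence.
        sequences = [rng.weibull(2.0, 40) * 300.0,
                     rng.weibull(2.2, 25) * 520.0,
                     rng.weibull(1.8, 60) * 150.0]

        # One plausible rescaling: normalize each sequence by its own mean so that sequences
        # with very different mean recurrence intervals can be superimposed.
        rescaled = np.concatenate([s / s.mean() for s in sequences])

        wb = stats.weibull_min.fit(rescaled, floc=0)
        ln = stats.lognorm.fit(rescaled, floc=0)
        print("Weibull shape = %.2f, scale = %.2f" % (wb[0], wb[2]))
        print("log-normal sigma = %.2f, median = %.2f" % (ln[0], ln[2]))
        print("KS p-values: Weibull %.2f, log-normal %.2f" %
              (stats.kstest(rescaled, "weibull_min", args=wb).pvalue,
               stats.kstest(rescaled, "lognorm", args=ln).pvalue))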

  10. 3D modeling of effects of increased oxygenation and activity concentration in tumors treated with radionuclides and antiangiogenic drugs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagerloef, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    Purpose: Formation of new blood vessels (angiogenesis) in response to hypoxia is a fundamental event in the process of tumor growth and metastatic dissemination. However, abnormalities in tumor neovasculature often induce increased interstitial pressure (IP) and further reduce oxygenation (pO₂) of tumor cells. In radiotherapy, well-oxygenated tumors favor treatment. Antiangiogenic drugs may lower IP in the tumor, improving perfusion, pO₂ and drug uptake, by reducing the number of malfunctioning vessels in the tissue. This study aims to create a model for quantifying the effects of altered pO₂ distribution due to antiangiogenic treatment in combination with radionuclide therapy. Methods: Based on experimental data describing the effects of antiangiogenic agents on the oxygenation of Glioblastoma Multiforme (GBM), a single-cell-based 3D model, including 10¹⁰ tumor cells, was developed, showing how radionuclide therapy response improves as tumor oxygenation approaches normal tissue levels. The nuclides studied were ⁹⁰Y, ¹³¹I, ¹⁷⁷Lu, and ²¹¹At. The absorbed dose levels required for a tumor control probability (TCP) of 0.990 are compared for three different log-normal pO₂ distributions: μ₁ = 2.483, σ₁ = 0.711; μ₂ = 2.946, σ₂ = 0.689; μ₃ = 3.689, σ₃ = 0.330. The normal tissue absorbed doses will, in turn, depend on this. These distributions were chosen to represent the expected oxygen levels in an untreated hypoxic tumor, a hypoxic tumor treated with an anti-VEGF agent, and normal, fully oxygenated tissue, respectively. The former two are fitted to experimental data. The geometric oxygen distributions are simulated using two different patterns, one Monte Carlo based and one radially increasing, while keeping the log-normal volumetric distributions intact. Oxygen and activity are distributed according to the same pattern. Results: As tumor pO₂ approaches normal tissue levels, the therapeutic effect is improved so that the normal tissue absorbed doses can be decreased by more than 95%, while retaining TCP, in the most favorable scenario, and by up to about 80% with oxygen levels previously achieved in vivo when the least favorable oxygenation case is used as the starting point. The major difference occurs in poorly oxygenated cells. This is also where the pO₂ dependence of the oxygen enhancement ratio is maximal. Conclusions: Improved tumor oxygenation together with increased radionuclide uptake shows great potential for optimising treatment strategies, leaving room for successive treatments, or lowering absorbed dose to normal tissues, due to increased tumor response. Further studies of the concomitant use of antiangiogenic drugs and radionuclide therapy therefore appear merited.
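
    A rough feel for the dose-versus-oxygenation comparison can be obtained from a simple Poisson TCP sketch in which per-cell radiosensitivity is scaled by a standard oxygen-enhancement-ratio curve and cell pO₂ is drawn from the quoted log-normal distributions. The radiobiological parameters below (α, maximum OER, K) and the single-exponential survival model are assumptions for illustration, not the paper's 3D model.

        import numpy as np

        rng = np.random.default_rng(2)

        # Log-normal pO2 (mmHg) scenarios quoted above: untreated, anti-VEGF treated, normoxic.
        scenarios = {"untreated": (2.483, 0.711), "anti-VEGF": (2.946, 0.689), "normoxic": (3.689, 0.330)}

        n_cells = 1e10           # tumor cells, as in the 3D model
        n_sample = 200_000       # Monte Carlo sample of cell oxygenations
        alpha = 0.3              # assumed radiosensitivity of fully oxygenated cells (1/Gy)
        oer_max, K = 3.0, 3.0    # assumed Alper-Howard-Flanders OER parameters (max OER, half-point in mmHg)

        def dose_for_tcp(mu, sigma, target=0.99):
            p = rng.lognormal(mu, sigma, n_sample)
            oer = (oer_max * p + K) / (p + K)              # 1 under anoxia, oer_max when well oxygenated
            def tcp(dose):
                sf = np.exp(-alpha * dose * oer / oer_max)  # per-cell surviving fraction
                return np.exp(-n_cells * sf.mean())         # Poisson TCP
            lo, hi = 1.0, 1000.0
            for _ in range(60):                              # bisection on the monotone dose-TCP curve
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if tcp(mid) < target else (lo, mid)
            return 0.5 * (lo + hi)

        for name, (mu, sigma) in scenarios.items():
            print(f"{name:10s}  dose for TCP = 0.99: {dose_for_tcp(mu, sigma):6.1f} Gy")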

  11. Size and Velocity Characteristics of Droplets Generated by Thin Steel Slab Continuous Casting Secondary Cooling Air-Mist Nozzles

    NASA Astrophysics Data System (ADS)

    Minchaca M, J. I.; Castillejos E, A. H.; Acosta G, F. A.

    2011-06-01

    Direct spray impingement of high temperature surfaces, 1473 K to 973 K (1200 °C to 700 °C), plays a critical role in the secondary cooling of continuously cast thin steel slabs. It is known that the spray parameters affecting the local heat flux are the water impact flux w as well as the droplet velocity and size. However, little work has been done to characterize the last two parameters in the case of dense mists (i.e., mists with w in the range of 2 to 90 L/m2s). This makes it difficult to rationalize how the nozzle type and its operating conditions must be selected to control the cooling process. In the present study, particle/droplet image analysis was used to determine the droplet size and velocity distributions simultaneously at various locations along the major axis of the mist cross section, at the distance where the steel strand would stand. The measurements were carried out at room temperature for two standard commercial air-assisted nozzles of fan-discharge type operating over a broad range of conditions of practical interest. To achieve statistically meaningful samples, at least 6000 drops were analyzed at each location. The droplet size measurements revealed that the number and volume frequency distributions were fitted satisfactorily by log-normal and Nukiyama-Tanasawa distributions, respectively. The correlation of the parameters of the distribution functions with the water- and air-nozzle pressures allowed for reasonable estimation of the mean sizes of the droplets generated. The ensemble of measurements across the mist axis showed that the relationship between droplet velocity and diameter exhibited a weak positive correlation. Additionally, increasing the water flow rate at constant air pressure decreased the proportion of the water volume made up of finer droplets, whereas the volume proportion of faster droplets increased until the water flow reached a certain value, after which it decreased. Decreasing the air-to-water flow rate ratio, particularly below 10, resulted in mists of bigger and slower droplets with low impinging Weber numbers. However, increasing the air pressure while maintaining a constant water flow rate produced a greater proportion of finer and faster drops with Weber numbers greater than 80, which suggests an increased probability of wet drop contact with a hot surface that would intensify heat extraction.

  12. Effects of Climate Change on Subterranean Termite Territory Size: A Simulation Study

    PubMed Central

    Lee, Sang-Hee; Chon, Tae-Soo

    2011-01-01

    In order to study how climate change affects the territory size of subterranean termites, a lattice model was used to simulate the foraging territory of the Formosan subterranean termite, Coptotermes formosanus Shiraki (Isoptera: Rhinotermitidae), and minimized local rules based on empirical data on the development of termite foraging territories were applied. A landscape was generated by randomly assigning values ranging from 0.0 to 1.0 to each lattice site, representing the spatially distributed property of the landscape. At the beginning of each simulation run, N territory seeds, one for each founding pair, were randomly distributed on the lattice space. The territories grew during the summer and shrank during the winter. In the model, the effects of climate change were represented by changes in two variables: the period of the summer season, T, and the percentage of remaining termite cells, σ, after the shrinkage. The territory size distribution was investigated in descending order of size for the values of T (= 10, 15, ..., 50) and σ (= 10, 15, ..., 50) at a steady state after a sufficiently long time period. The distribution was separated into two regions: the larger-sized territories and the smaller-sized territories. The slope, m, of the distribution of territory size on a semi-log scale for the larger-sized territories was maximal when T was in its highest range (45 ≤ T ≤ 50) and σ was in an optimal range (30 ≤ σ ≤ 40), regardless of the value of N. The results suggest that climate change can influence the termite territory size distribution under the proper balance of T and σ in combination. PMID:21870966

  13. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate the performance of normality tests in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best-performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
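
    The kind of simulation described here is easy to reproduce in outline: draw small samples from Gaussian and log-normal parents and record how often each normality test rejects. The sketch below uses illustrative population parameters, and its "false-rejection rate"/"power" labels may not map exactly onto the sensitivity and specificity definitions used in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n, reps, level = 30, 1000, 0.05

        def rejection_rates(sampler):
            """Fraction of samples rejected by Shapiro-Wilk and D'Agostino-Pearson at the given level."""
            rej_sw = rej_dp = 0
            for _ in range(reps):
                x = sampler(n)
                rej_sw += stats.shapiro(x).pvalue < level
                rej_dp += stats.normaltest(x).pvalue < level
            return rej_sw / reps, rej_dp / reps

        # Truly Gaussian parent population: rejections here are false alarms.
        fa_sw, fa_dp = rejection_rates(lambda m: rng.normal(0.0, 1.0, m))
        # Log-normal parent population: rejections here are correct detections (power).
        pw_sw, pw_dp = rejection_rates(lambda m: rng.lognormal(0.0, 0.5, m))

        print(f"n = {n}")
        print(f"Shapiro-Wilk:        false-rejection rate {fa_sw:.2f}, power vs log-normal {pw_sw:.2f}")
        print(f"D'Agostino-Pearson:  false-rejection rate {fa_dp:.2f}, power vs log-normal {pw_dp:.2f}")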

  14. The mass distribution of coarse particulate organic matter exported from an alpine headwater stream

    NASA Astrophysics Data System (ADS)

    Turowski, J. M.; Badoux, A.; Bunte, K.; Rickli, C.; Federspiel, N.; Jochner, M.

    2013-05-01

    Coarse particulate organic matter (CPOM) particles span sizes from 1 mm, with masses less than 1 mg, to large logs and whole trees, which may have masses of several hundred kilograms. Different size and mass classes play different roles in stream environments, from being the prime source of energy in stream ecosystems to macroscopically determining channel morphology and local hydraulics. We show that a single scaling exponent can describe the mass distribution of CPOM transported in the Erlenbach, a steep mountain stream in the Swiss Prealps. This exponent takes an average value of -1.8, is independent of discharge and valid for particle masses spanning almost seven orders of magnitude. Together with a rating curve of CPOM transport rates with discharge, we discuss the importance of the scaling exponent for measuring strategies and natural hazard mitigation. Similar to CPOM, the mass distribution of in-stream large woody debris can likewise be described by power law scaling distributions, with exponents varying between -1.8 and -2.0, if all in-stream material is considered, and between -1.4 and -1.8 for material locked in log jams. We expect that scaling exponents are determined by stream type, vegetation, climate, substrate properties, and the connectivity between channels and hillslopes. However, none of the descriptor variables tested here, including drainage area, channel bed slope and forested area, show a strong control on exponent value. The number of streams studied in this paper is too small to make final conclusions.
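
    A scaling exponent of this kind can be estimated directly from individual particle masses with the standard continuous power-law maximum-likelihood estimator; in the sketch below the masses are synthetic stand-ins and the lower cutoff m_min is an assumption.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic CPOM particle masses (g) standing in for measured transported masses:
        # a Pareto sample with exponent 1.8 above an assumed lower cutoff m_min.
        m_min, gamma_true = 1e-3, 1.8
        masses = m_min * (1.0 - rng.random(5000)) ** (-1.0 / (gamma_true - 1.0))

        # Continuous power-law maximum-likelihood estimator (Clauset et al. 2009):
        #   gamma_hat = 1 + n / sum(ln(m_i / m_min)),   s.e. ~ (gamma_hat - 1) / sqrt(n)
        n = masses.size
        gamma_hat = 1.0 + n / np.log(masses / m_min).sum()
        se = (gamma_hat - 1.0) / np.sqrt(n)
        print(f"fitted mass-distribution exponent: -{gamma_hat:.2f} +/- {se:.2f}")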

  15. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
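
    The lognormal-reference-model fit and the relative standard errors discussed above can be sketched as a nonlinear regression of the cumulative number-based distribution; the simulated diameters below merely mimic a ~27.6 nm log-normal sample and are not the interlaboratory data.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        rng = np.random.default_rng(5)

        # Hypothetical TEM area-equivalent diameters (nm), standing in for measured particles.
        d = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=400)

        # Empirical cumulative number-based distribution.
        d_sorted = np.sort(d)
        ecdf = (np.arange(d_sorted.size) + 0.5) / d_sorted.size

        # Log-normal reference model for the cumulative distribution.
        def lognormal_cdf(x, mu, sigma):
            return norm.cdf((np.log(x) - mu) / sigma)

        popt, pcov = curve_fit(lognormal_cdf, d_sorted, ecdf, p0=[np.log(27.6), 0.1])
        perr = np.sqrt(np.diag(pcov))
        rse = perr / np.abs(popt)                 # relative standard errors of the fitted mu and sigma

        print("fitted geometric mean diameter (nm):", np.exp(popt[0]))
        print("fitted geometric standard deviation:", np.exp(popt[1]))
        print("relative standard errors (mu, sigma):", rse)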

  16. Multilevel mixed effects parametric survival models using adaptive Gauss-Hermite quadrature with application to recurrent events and individual participant data meta-analysis.

    PubMed

    Crowther, Michael J; Look, Maxime P; Riley, Richard D

    2014-09-28

    Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
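
    A minimal sketch of the quadrature idea is given below, assuming a Weibull proportional-hazards model with a single normal random intercept integrated out by a non-adaptive Gauss-Hermite rule. The simulated multicenter data and parameter values are illustrative, and the models in the paper (adaptive quadrature, several random effects, flexible baselines) go well beyond this.

        import numpy as np
        from numpy.polynomial.hermite import hermgauss
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)

        # Simulated clustered survival data: hypothetical multicenter trial with a Weibull
        # baseline hazard, one binary covariate and a normal random intercept per center.
        n_clusters, n_per = 20, 25
        beta_t, sigma_t, lam_t, gam_t = -0.5, 0.7, 0.1, 1.3
        cluster = np.repeat(np.arange(n_clusters), n_per)
        x = rng.integers(0, 2, cluster.size).astype(float)
        b = rng.normal(0.0, sigma_t, n_clusters)[cluster]
        t_event = (-np.log(rng.random(cluster.size)) / (lam_t * np.exp(beta_t * x + b))) ** (1.0 / gam_t)
        c_time = rng.exponential(5.0, cluster.size)
        delta, t = (t_event <= c_time).astype(float), np.minimum(t_event, c_time)

        nodes, weights = hermgauss(15)      # 15-point (non-adaptive) Gauss-Hermite rule

        def negloglik(theta):
            beta, sigma, lam, gam = theta[0], np.exp(theta[1]), np.exp(theta[2]), np.exp(theta[3])
            ll = 0.0
            for i in range(n_clusters):
                sel = cluster == i
                ti, di, xi = t[sel], delta[sel], x[sel]
                contrib = 0.0
                for z, w in zip(nodes, weights):
                    bi = np.sqrt(2.0) * sigma * z            # change of variables for N(0, sigma^2)
                    eta = beta * xi + bi
                    log_h = np.log(lam * gam) + (gam - 1.0) * np.log(ti) + eta
                    H = lam * ti**gam * np.exp(eta)
                    contrib += w / np.sqrt(np.pi) * np.exp(np.sum(di * log_h - H))
                ll += np.log(contrib + 1e-300)
            return -ll

        fit = minimize(negloglik, x0=[0.0, np.log(0.5), np.log(0.1), 0.0], method="Nelder-Mead",
                       options={"maxiter": 4000, "xatol": 1e-3, "fatol": 1e-3})
        print("beta_hat =", round(fit.x[0], 3), "  sigma_hat =", round(np.exp(fit.x[1]), 3))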

  17. Statistical Analysis of Hubble/WFC3 Transit Spectroscopy of Extrasolar Planets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guangwei; Deming, Drake; Knutson, Heather

    2017-10-01

    Transmission spectroscopy provides a window to study exoplanetary atmospheres, but that window is fogged by clouds and hazes. Clouds and haze introduce a degeneracy between the strength of gaseous absorption features and planetary physical parameters such as abundances. One way to break that degeneracy is via statistical studies. We collect all published HST/WFC3 transit spectra for 1.1–1.65 μm water vapor absorption and perform a statistical study on potential correlations between the water absorption feature and planetary parameters. We fit the observed spectra with a template calculated for each planet using the Exo-transmit code. We express the magnitude of the water absorption in scale heights, thereby removing the known dependence on temperature, surface gravity, and mean molecular weight. We find that the absorption in scale heights has a positive baseline correlation with planetary equilibrium temperature; our hypothesis is that decreasing cloud condensation with increasing temperature is responsible for this baseline slope. However, the observed sample is also intrinsically degenerate in the sense that equilibrium temperature correlates with planetary mass. We compile the distribution of absorption in scale heights, and we find that this distribution is closer to log-normal than Gaussian. However, we also find that the distribution of equilibrium temperatures for the observed planets is similarly log-normal. This indicates that the absorption values are affected by observational bias, whereby observers have not yet targeted a sufficient sample of the hottest planets.
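
    Expressing an absorption amplitude in scale heights only requires the pressure scale height H = kT/(μ m_H g) and the transit-depth change per scale height, roughly 2 R_p H / R_*². The planetary and stellar parameters and the measured amplitude in the sketch below are illustrative assumptions, not values from the WFC3 sample.

        import numpy as np

        # Physical constants (SI).
        k_B, m_H = 1.380649e-23, 1.6735575e-27

        # Roughly hot-Jupiter-like parameters; illustrative only.
        T_eq = 1450.0              # equilibrium temperature, K
        g = 9.4                    # surface gravity, m/s^2
        mu = 2.3                   # mean molecular weight (H2/He atmosphere)
        R_p = 1.38 * 6.9911e7      # planet radius, m
        R_star = 1.16 * 6.957e8    # stellar radius, m

        H = k_B * T_eq / (mu * m_H * g)               # pressure scale height, m
        depth_per_H = 2.0 * R_p * H / R_star**2       # transit-depth change per scale height

        # Made-up peak-to-trough amplitude of the 1.4 um water feature (fractional transit depth).
        delta_depth = 2.0e-4
        print("scale height H = %.0f km" % (H / 1e3))
        print("water absorption amplitude = %.2f scale heights" % (delta_depth / depth_per_H))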

  18. In-residence, multiple route exposures to chlorpyrifos and diazinon estimated by indirect method models

    NASA Astrophysics Data System (ADS)

    Moschandreas, D. J.; Kim, Y.; Karuchit, S.; Ari, H.; Lebowitz, M. D.; O'Rourke, M. K.; Gordon, S.; Robertson, G.

    One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure distributions of chlorpyrifos and diazinon in the residential microenvironment using the database generated in Arizona (NHEXAS-AZ). A four-stage probability sampling design was used for sample selection. Exposures to pesticides were estimated using the indirect method of exposure calculation by combining measured concentrations of the two pesticides in multiple media with questionnaire information such as time subjects spent indoors, dietary and non-dietary items they consumed, and areas they touched. Most distributions of in-residence exposure to chlorpyrifos and diazinon were log-normal or nearly log-normal. Exposures to chlorpyrifos and diazinon vary by pesticide and route as well as by various demographic characteristics of the subjects. Comparisons of exposure to pesticides were investigated among subgroups of demographic categories, including gender, age, minority status, education, family income, household dwelling type, year the dwelling was built, pesticide use, and carpeted areas within dwellings. Residents with large carpeted areas within their dwellings have higher exposures to both pesticides for all routes than those in less carpet-covered areas. Depending on the route, several other determinants of exposure to pesticides were identified, but a clear pattern could not be established regarding the exposure differences between several subpopulation groups.

  19. Statistical Analysis of Hubble/WFC3 Transit Spectroscopy of Extrasolar Planets

    NASA Astrophysics Data System (ADS)

    Fu, Guangwei; Deming, Drake; Knutson, Heather; Madhusudhan, Nikku; Mandell, Avi; Fraine, Jonathan

    2018-01-01

    Transmission spectroscopy provides a window to study exoplanetary atmospheres, but that window is fogged by clouds and hazes. Clouds and haze introduce a degeneracy between the strength of gaseous absorption features and planetary physical parameters such as abundances. One way to break that degeneracy is via statistical studies. We collect all published HST/WFC3 transit spectra for 1.1-1.65 micron water vapor absorption, and perform a statistical study on potential correlations between the water absorption feature and planetary parameters. We fit the observed spectra with a template calculated for each planet using the Exo-Transmit code. We express the magnitude of the water absorption in scale heights, thereby removing the known dependence on temperature, surface gravity, and mean molecular weight. We find that the absorption in scale heights has a positive baseline correlation with planetary equilibrium temperature; our hypothesis is that decreasing cloud condensation with increasing temperature is responsible for this baseline slope. However, the observed sample is also intrinsically degenerate in the sense that equilibrium temperature correlates with planetary mass. We compile the distribution of absorption in scale heights, and we find that this distribution is closer to log-normal than Gaussian. However, we also find that the distribution of equilibrium temperatures for the observed planets is similarly log-normal. This indicates that the absorption values are affected by observational bias, whereby observers have not yet targeted a sufficient sample of the hottest planets.

  20. Statistical Analysis of Hubble/WFC3 Transit Spectroscopy of Extrasolar Planets

    NASA Astrophysics Data System (ADS)

    Fu, Guangwei; Deming, Drake; Knutson, Heather; Madhusudhan, Nikku; Mandell, Avi; Fraine, Jonathan

    2017-10-01

    Transmission spectroscopy provides a window to study exoplanetary atmospheres, but that window is fogged by clouds and hazes. Clouds and haze introduce a degeneracy between the strength of gaseous absorption features and planetary physical parameters such as abundances. One way to break that degeneracy is via statistical studies. We collect all published HST/WFC3 transit spectra for 1.1-1.65 μm water vapor absorption and perform a statistical study on potential correlations between the water absorption feature and planetary parameters. We fit the observed spectra with a template calculated for each planet using the Exo-transmit code. We express the magnitude of the water absorption in scale heights, thereby removing the known dependence on temperature, surface gravity, and mean molecular weight. We find that the absorption in scale heights has a positive baseline correlation with planetary equilibrium temperature; our hypothesis is that decreasing cloud condensation with increasing temperature is responsible for this baseline slope. However, the observed sample is also intrinsically degenerate in the sense that equilibrium temperature correlates with planetary mass. We compile the distribution of absorption in scale heights, and we find that this distribution is closer to log-normal than Gaussian. However, we also find that the distribution of equilibrium temperatures for the observed planets is similarly log-normal. This indicates that the absorption values are affected by observational bias, whereby observers have not yet targeted a sufficient sample of the hottest planets.

  1. Three-dimensional simulation of gas and dust in Io's Pele plume

    NASA Astrophysics Data System (ADS)

    McDoniel, William J.; Goldstein, David B.; Varghese, Philip L.; Trafton, Laurence M.

    2015-09-01

    Io's giant Pele plume rises high above the moon's surface and produces a complex deposition pattern. We use the direct simulation Monte Carlo (DSMC) method to model the flow of SO2 gas and silicate ash from the surface of the lava lake, into the umbrella-shaped canopy of the plume, and eventually onto the surface where the flow leaves black "butterfly wings" surrounded by a large red ring. We show how the geometry of the lava lake, from which the gas is emitted, is responsible for significant asymmetry in the plume and for the shape of the red deposition ring by way of complicated gas-dynamic interactions between parts of the gas flow arising from different areas in the lava lake. We develop a model for gas flow in the immediate vicinity of the lava lake and use it to show that the behavior of ash particles of less than about 2 μm in diameter in the plume is insensitive to the details of how they are introduced into the flow because they are coupled to the gas at low altitudes. We simulate dust particles in the plume to show how particle size determines the distance from the lava lake at which particles deposit on the surface, and we use this dependence to find a size distribution of black dust particles in the plume that provides the best explanation for the observed black fans to the east and west of the lava lake. This best-fit particle size distribution suggests that there may be two distinct mechanisms of black dust creation at Pele, and when two log-normal distributions are fit to our results we obtain a mean particle diameter of 88 nm. We also propose a mechanism by which the condensible plume gas might overlay black dust in areas where black coloration is not observed and compare this to the observed overlaying of Pillanian dust by Pele's red ring.
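
    Fitting two log-normal components to a particle size sample is equivalent to fitting a two-component Gaussian mixture in log-diameter; the sketch below does this with synthetic diameters whose modes are assumed values, not the Pele best-fit result.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)

        # Hypothetical dust diameters (nm): two log-normal populations standing in for a
        # best-fit black-dust size distribution.
        d = np.concatenate([rng.lognormal(np.log(60.0), 0.35, 3000),
                            rng.lognormal(np.log(400.0), 0.30, 1200)])

        # A mixture of two Gaussians in log-diameter is a mixture of two log-normals in diameter.
        gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(d).reshape(-1, 1))

        for w, m, v in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
            print(f"weight {w:.2f}  median diameter {np.exp(m):6.1f} nm  sigma_log {np.sqrt(v):.2f}")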

  2. On the intrinsic shape of the gamma-ray spectrum for Fermi blazars

    NASA Astrophysics Data System (ADS)

    Kang, Shi-Ju; Wu, Qingwen; Zheng, Yong-Gang; Yin, Yue; Song, Jia-Li; Zou, Hang; Feng, Jian-Chao; Dong, Ai-Jun; Wu, Zhong-Zu; Zhang, Zhi-Bin; Wu, Lin-Hui

    2018-05-01

    The curvature of the γ-ray spectrum in blazars may reflect the intrinsic distribution of emitting electrons, which can further provide information on the possible acceleration and cooling processes in the emitting region. The γ-ray spectra of Fermi blazars are normally fitted by either a single power-law (PL) or a log-normal (so-called logarithmic parabola, LP) form. The possible reason for this difference is not clear. We statistically explore this issue based on the different observational properties of 1419 Fermi blazars in the 3LAC Clean Sample. We find that the γ-ray flux (100 MeV–100 GeV) and variability index follow bimodal distributions for PL and LP blazars, where the γ-ray flux and variability index show a positive correlation. However, the distributions of γ-ray luminosity and redshift follow a unimodal distribution. Our results suggest that the bimodal distribution of γ-ray fluxes for LP and PL blazars may not be intrinsic and that all blazars may have an intrinsically curved γ-ray spectrum, with the PL spectrum simply being a fitting effect due to fewer photons.

  3. RE-EXAMINING SUNSPOT TILT ANGLE TO INCLUDE ANTI-HALE STATISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClintock, B. H.; Norton, A. A.; Li, J., E-mail: u1049686@umail.usq.edu.au, E-mail: aanorton@stanford.edu, E-mail: jli@igpp.ucla.edu

    2014-12-20

    Sunspot groups and bipolar magnetic regions (BMRs) serve as an observational diagnostic of the solar cycle. We use Debrecen Photoheliographic Data (DPD) from 1974-2014 that determined sunspot tilt angles from daily white light observations, and data provided by Li and Ulrich that determined sunspot magnetic tilt angles using Mount Wilson magnetograms from 1974-2012. The magnetograms allowed for BMR tilt angles that were anti-Hale in configuration, so tilt values ranged from 0° to 360° rather than the more common ±90°. We explore the visual representation of magnetic tilt angles on a traditional butterfly diagram by plotting the mean area-weighted latitude of umbral activity in each bipolar sunspot group, including tilt information. The large scatter of tilt angles over the course of a single cycle and hemisphere prevents Joy's law from being visually identified in the tilt-butterfly diagram without further binning. The average latitude of anti-Hale regions does not differ from the average latitude of all regions in both hemispheres. Anti-Hale sunspot tilt angles are broadly distributed between 0° and 360°, with a weak preference for east-west alignment 180° from their expected Joy's law angle. The anti-Hale sunspots display a log-normal size distribution similar to that of all sunspots, indicating no preferred size for anti-Hale sunspots. We report that 8.4% ± 0.8% of all bipolar sunspot regions are misclassified as Hale in traditional catalogs. This percentage is slightly higher for groups within 5° of the equator due to the misalignment of the magnetic and heliographic equators.

  4. Internal habitat quality determines the effects of fragmentation on austral forest climbing and epiphytic angiosperms.

    PubMed

    Magrach, Ainhoa; Larrinaga, Asier R; Santamaría, Luis

    2012-01-01

    Habitat fragmentation has become one of the major threats to biodiversity worldwide, particularly in the case of forests, which have suffered enormous losses during the past decades. We analyzed how changes in patch configuration and habitat quality derived from the fragmentation of austral temperate rainforests affect the distribution of six species of forest-dwelling climbing and epiphytic angiosperms. Epiphyte and vine abundance is primarily affected by the internal characteristics of patches (such as tree size, the presence of logging gaps or the proximity to patch edges) rather than patch and landscape features (such as patch size, shape or connectivity). These responses were intimately related to species-specific characteristics such as drought or shade tolerance. Our study therefore suggests that plant responses to fragmentation are contingent on both the species' ecology and the specific pathways through which the study area is being fragmented (i.e., extensive logging that shaped the boundaries of current forest patches plus recent, unregulated logging that creates gaps within patches). Management practices in fragmented landscapes should therefore consider habitat quality within patches together with other spatial attributes at landscape or patch scales.

  5. Scaling laws and properties of compositional data

    NASA Astrophysics Data System (ADS)

    Buccianti, Antonella; Albanese, Stefano; Lima, AnnaMaria; Minolfi, Giulia; De Vivo, Benedetto

    2016-04-01

    Many random processes occur in geochemistry. Accurate predictions of the manner in which elements or chemical species interact with each other are needed to construct models able to treat the presence of random components. The geochemical variables actually observed are the consequence of several events, some of which may be poorly defined or imperfectly understood. Variables tend to change with time/space but, despite their complexity, may share specific common traits, and it is possible to model them stochastically. Description of the frequency distribution of geochemical abundances has been an important target of research, attracting attention for at least 100 years, starting with CLARKE (1889) and continuing with GOLDSCHMIDT (1933) and WEDEPOHL (1955). However, it was AHRENS (1954a,b) who focused on the effect of skewed distributions, for example the log-normal distribution, regarded by him as a fundamental law of geochemistry. Although the modeling of frequency distributions with probabilistic models (for example Gaussian, log-normal, Pareto) has been well discussed in several fields of application, little attention has been devoted to the features of compositional data. When the compositional nature of the data is taken into account, the most typical distribution models for compositions are the Dirichlet and the additive logistic normal (or normal on the simplex) (AITCHISON et al. 2003; MATEU-FIGUERAS et al. 2005; MATEU-FIGUERAS and PAWLOWSKY-GLAHN 2008; MATEU-FIGUERAS et al. 2013). As an alternative, because compositional data have to be transformed from the simplex to real space, coordinates obtained by the ilr transformation or by application of the concept of balance can be analyzed by classical methods (EGOZCUE et al. 2003). In this contribution an approach coherent with the properties of compositional information is proposed and used to investigate the shape of the frequency distribution of compositional data. The purpose is to understand data-generation processes from the perspective of compositional theory. The approach is based on the use of the isometric log-ratio (ilr) transformation, characterized by theoretical and practical advantages, but requiring a more complex geochemical interpretation compared with the investigation of single variables. The proposed methodology directs attention to modeling the frequency distributions of more complex indices, linking all the terms of the composition to better represent the dynamics of geochemical processes. An example of its application is presented and discussed by considering the topsoil geochemistry of the Campania Region (southern Italy). The investigated multi-element data archive contains, among others, Al, As, B, Ba, Ca, Co, Cr, Cu, Fe, K, La, Mg, Mn, Mo, Na, Ni, P, Pb, Sr, Th, Ti, V and Zn (mg/kg) contents determined in 3535 new topsoils, as well as information on coordinates, geology and land cover (BUCCIANTI et al., 2015). AHRENS, L., 1954a. Geochim. Cosm. Acta 6, 121-131. AHRENS, L., 1954b. Geochim. Cosm. Acta 5, 49-73. AITCHISON, J., et al., 2003. Math Geol 35(6), 667-680. BUCCIANTI et al., 2015. Jour. Geoch. Explor. 159, 302-316. CLARKE, F., 1889. Phil. Society of Washington Bull. 11, 131-142. EGOZCUE, J.J., et al., 2003. Math Geol 35(3), 279-300. MATEU-FIGUERAS, G., et al., 2005. Stoch. Environ. Res. Risk Ass. 19(3), 205-214.
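
    For reference, the ilr coordinates mentioned above can be computed with the standard pivot-coordinate formula; the three-part subcomposition in the sketch is an invented example, not a Campania topsoil record.

        import numpy as np

        def ilr_pivot(x):
            """Pivot (ilr) coordinates of a composition x (positive parts, any total)."""
            x = np.asarray(x, dtype=float)
            D = x.size
            z = np.empty(D - 1)
            for i in range(D - 1):
                rest = x[i + 1:]
                gm = np.exp(np.mean(np.log(rest)))            # geometric mean of the remaining parts
                z[i] = np.sqrt((D - i - 1) / (D - i)) * np.log(x[i] / gm)
            return z

        # Hypothetical three-part subcomposition (mg/kg), e.g. Al, As, Cu -- illustrative values.
        sample = np.array([45000.0, 12.0, 85.0])
        print(ilr_pivot(sample))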

  6. Coma dust scattering concepts applied to the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Fink, Uwe; Rinaldi, Giovanna

    2015-09-01

    This paper describes basic concepts, as well as providing a framework, for the interpretation of the light scattered by the dust in a cometary coma as observed by instruments on a spacecraft such as Rosetta. It is shown that the expected optical depths are small enough that single scattering can be applied. Each of the quantities that contribute to the scattered intensity is discussed in detail. Using optical constants of the likely coma dust constituents, olivine, pyroxene and carbon, the scattering properties of the dust are calculated. For the resulting observable scattering intensities, several particle size distributions are considered: a simple power law, power laws with a small-particle cutoff, and log-normal distributions with various parameters. Within the context of a simple outflow model, the standard definition of Afρ for a circular observing aperture is expanded to an equivalent Afρ for an annulus and a specific line-of-sight observation. The resulting equivalence between the observed intensity and Afρ is used to predict observable intensities for 67P/Churyumov-Gerasimenko at the spacecraft encounter near 3.3 AU and near perihelion at 1.3 AU. This is done by normalizing particle production rates of various size distributions to agree with observed ground-based Afρ values. Various geometries for the column densities in a cometary coma are considered. The calculations for a simple outflow model are compared with more elaborate Direct Simulation Monte Carlo (DSMC) models to define the limits of applicability of the simpler analytical approach. Thus our analytical approach can be applied to the majority of the Rosetta coma observations, particularly beyond several nuclear radii where the dust is no longer in a collisional environment, without recourse to computer-intensive DSMC calculations for specific cases. In addition to a spherically symmetric 1-dimensional approach, we investigate column densities for the 2-dimensional DSMC model on the day and night sides of the comet. Our calculations are also applied to estimates of the dust particle densities and flux, which are useful for the in-situ experiments on Rosetta.

  7. Power of tests of normality for detecting contaminated normal samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thode, H.C. Jr.; Smith, L.A.; Finch, S.J.

    1981-01-01

    Seventeen tests of normality or goodness of fit were evaluated for power at detecting a contaminated normal sample. This study used 1000 replications each of samples of size 12, 17, 25, 33, 50, and 100 from six different contaminated normal distributions. The kurtosis test was the most powerful over all sample sizes and contaminations. The Hogg and weighted Kolmogorov-Smirnov tests were second. The Kolmogorov-Smirnov, chi-squared, Anderson-Darling, and Cramer-von-Mises tests had very low power at detecting contaminated normal random variables. Tables of the power of the tests and the power curves of certain tests are given.
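
    A study of this type is straightforward to sketch: generate contaminated normal samples and record how often a test rejects normality. The contamination fraction and scale below, and the choice of comparison test, are illustrative assumptions rather than the six distributions used in the report.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        def contaminated_normal(n, eps=0.1, scale=4.0):
            """N(0,1) with probability 1-eps, contaminant N(0, scale^2) with probability eps."""
            outlier = rng.random(n) < eps
            return np.where(outlier, rng.normal(0.0, scale, n), rng.normal(0.0, 1.0, n))

        reps, level = 1000, 0.05
        for n in (25, 50, 100):
            rej_kurt = rej_ks = 0
            for _ in range(reps):
                x = contaminated_normal(n)
                rej_kurt += stats.kurtosistest(x).pvalue < level
                z = (x - x.mean()) / x.std(ddof=1)
                rej_ks += stats.kstest(z, "norm").pvalue < level   # naive KS with estimated parameters
            print(f"n = {n:3d}   power: kurtosis test {rej_kurt/reps:.2f},  Kolmogorov-Smirnov {rej_ks/reps:.2f}")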

  8. The weight distribution of coarse particulate organic matter exported from an alpine headwater stream

    NASA Astrophysics Data System (ADS)

    Turowski, Jens; Badoux, Alexandre; Bunte, Kristin; Rickli, Christian; Federspiel, Nicole

    2013-04-01

    Coarse particulate organic matter (CPOM) spans sizes from 1 mm particles, weighing less than 1 mg, to large logs and whole trees, which may weigh several hundred kilograms. Different size and weight classes play different roles in stream environments, from being the prime source of energy in stream ecosystems to macroscopically determining channel morphology and local hydraulics. We show that a single scaling exponent can describe the weight distribution of CPOM transported in a mountain stream. This exponent is independent of discharge and valid for particle weights spanning almost seven orders of magnitude. Together with a rating curve of CPOM transport rates with discharge, we discuss the importance of the scaling exponent for measuring strategies, natural hazard mitigation and ecosystems.

  9. Background concentrations of metals in soils from selected regions in the State of Washington

    USGS Publications Warehouse

    Ames, K.C.; Prych, E.A.

    1995-01-01

    Soil samples from 60 sites in the State of Washington were collected and analyzed to determine the magnitude and variability of background concentrations of metals in soils of the State. Samples were collected in areas that were relatively undisturbed by human activity from the most predominant soils in 12 different regions that are representative of large areas of Washington State. Concentrations of metals were determined by five different laboratory methods. Concentrations of mercury and nickel determined by both the total and total-recoverable methods displayed the greatest variability, followed by chromium and copper determined by the total-recoverable method. Concentrations of other metals, such as aluminum and barium determined by the total method, varied less. Most metals concentrations were found to be more nearly log-normally than normally distributed. Total metals concentrations were not significantly different among the different regions. However, total-recoverable metals concentrations were not as similar among different regions. Cluster analysis revealed that sampling sites in three regions encompassing the Puget Sound could be regrouped to form two new regions and sites in three regions in south-central and southeastern Washington State could also be regrouped into two new regions. Concentrations for 7 of 11 total-recoverable metals correlated with total metals concentrations. Concentrations of six total metals also correlated positively with organic carbon. Total-recoverable metals concentrations did not correlate with either organic carbon or particle size. Concentrations of metals determined by the leaching methods did not correlate with total or total-recoverable metals concentrations, nor did they correlate with organic carbon or particle size.

  10. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  11. Size distributions of aerosol and water-soluble ions in Nanjing during a crop residual burning event.

    PubMed

    Wang, Honglei; Zhu, Bin; Shen, Lijuan; Kang, Hanqing

    2012-01-01

    To investigate the impact on urban air pollution of crop residual burning outside Nanjing, aerosol concentration, pollution gas concentration, mass concentration, and water-soluble ion size distribution were observed during one event of November 4-9, 2010. Results show that the size distribution of aerosol concentration is bimodal on pollution days and normal days, with peak values at 60-70 and 200-300 nm, respectively. Aerosol concentration is 10⁴ cm⁻³ nm⁻¹ on pollution days. The peak value of the spectrum distribution of aerosol concentration on pollution days is 1.5-3.3 times higher than that on a normal day. Crop residual burning has a great impact on the concentration of fine particles. Diurnal variation of aerosol concentration is trimodal on pollution days and normal days, with peak values at 03:00, 09:00 and 19:00 local standard time. The first peak is impacted by meteorological elements, while the second and third peaks are due to human activities, such as rush hour traffic. Crop residual burning has the greatest impact on SO₂ concentration, followed by NO₂; O₃ is hardly affected. The impact of crop residual burning on fine particles (<2.1 μm) is larger than on coarse particles (>2.1 μm), thus ion concentration in fine particles is higher than that in coarse particles. Crop residual burning leads to a similar increase in all ion components, thus it has a small impact on the order of the water-soluble ions. Crop residual burning has a strong impact on the size distributions of K⁺, Cl⁻, Na⁺, and F⁻ and a weak impact on the size distributions of NH₄⁺, Ca²⁺, NO₃⁻ and SO₄²⁻.

  12. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  13. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
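
    The point is easy to demonstrate numerically: a two-group model with well-separated means has a clearly non-normal (bimodal) pooled response, yet its residuals are exactly Gaussian. The sketch below uses arbitrary group means and sample sizes.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)

        # Two-group design with widely separated means: the raw response is bimodal,
        # but the residuals around the group means are N(0, 1) by construction.
        group = np.repeat([0, 1], 200)
        y = np.where(group == 0, 0.0, 8.0) + rng.normal(0.0, 1.0, group.size)

        resid = y - np.array([y[group == g].mean() for g in (0, 1)])[group]

        print("raw response:  Shapiro-Wilk p =", stats.shapiro(y).pvalue)       # tiny -> looks non-normal
        print("residuals:     Shapiro-Wilk p =", stats.shapiro(resid).pvalue)   # large -> well behaved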

  14. Understanding the implementation of evidence-based care: a structural network approach.

    PubMed

    Parchman, Michael L; Scoglio, Caterina M; Schumm, Phillip

    2011-02-24

    Recent study of complex networks has yielded many new insights into phenomena such as social networks, the internet, and sexually transmitted infections. The purpose of this analysis is to examine the properties of a network created by the 'co-care' of patients within one region of the Veterans Health Affairs. Data were obtained for all outpatient visits from 1 October 2006 to 30 September 2008 within one large Veterans Integrated Service Network. Types of physicians within each clinic were nodes, connected by shared patients, with a weighted link representing the number of shared patients between each connected pair. Network metrics calculated included edge weights, node degree, node strength, node coreness, and node betweenness. Log-log plots were used to examine the distribution of these metrics. Sizes of k-core networks were also computed under multiple conditions of node removal. There were 4,310,465 encounters by 266,710 shared patients between 722 provider types (nodes) across 41 stations or clinics, resulting in 34,390 edges. The number of other nodes to which primary care provider nodes have a connection (172.7) is 42% greater than that of general surgeons and two and one-half times as high as cardiology. The log-log plot of the edge weight distribution appears to be linear in nature, revealing a 'scale-free' characteristic of the network, while the distributions of node degree and node strength are less so. The analysis of the k-core network sizes under increasing removal of primary care nodes shows that the roughly 10 most connected primary care nodes play a critical role in keeping the k-core networks connected, because their removal disintegrates the highest k-core network. Delivery of healthcare in a large healthcare system such as that of the US Department of Veterans Affairs (VA) can be represented as a complex network. This network consists of highly connected provider nodes that serve as 'hubs' within the network, and demonstrates some 'scale-free' properties. By using currently available tools to explore its topology, we can explore how the underlying connectivity of such a system affects the behavior of providers, and perhaps leverage that understanding to improve quality and outcomes of care.
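
    The network metrics listed above can be computed with standard graph tooling; the sketch below builds a toy weighted co-care graph with invented provider types and patient counts, and reports degree, strength, coreness and betweenness for each node.

        import networkx as nx

        # Toy weighted co-care network: nodes are provider types, edge weights are counts of
        # shared patients (illustrative values only, not VA data).
        edges = [("primary care", "cardiology", 120),
                 ("primary care", "general surgery", 80),
                 ("primary care", "mental health", 150),
                 ("cardiology", "general surgery", 30),
                 ("mental health", "social work", 60),
                 ("general surgery", "anesthesiology", 45)]

        G = nx.Graph()
        G.add_weighted_edges_from(edges)

        degree = dict(G.degree())                       # number of connected provider types
        strength = dict(G.degree(weight="weight"))      # total shared patients (weighted degree)
        coreness = nx.core_number(G)                    # k-core index of each node
        between = nx.betweenness_centrality(G)          # unweighted betweenness (weights are affinities, not distances)

        for node in G.nodes:
            print(f"{node:16s} degree={degree[node]}  strength={strength[node]}  "
                  f"core={coreness[node]}  betweenness={between[node]:.2f}")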

  15. Dimension yields from short logs of low-quality hardwood trees.

    Treesearch

    Howard N. Rosen; Harold A. Stewart; David J. Polak

    1980-01-01

    Charts are presented for determining yields of 4/4 dimension cuttings from short hardwood logs of aspen, soft maple, black cherry, yellow-poplar, and black walnut for several cutting grades and bolt sizes. Cost comparisons of short log and standard grade mixes show the estimated least expensive...

  16. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
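
    A random-effects meta-analysis on the recommended logit scale can be sketched with the DerSimonian-Laird estimator; the validation C-statistics and standard errors below are invented numbers, not the QRISK2 results.

        import numpy as np

        # Hypothetical external-validation results: C-statistic and its standard error per cluster.
        c = np.array([0.78, 0.72, 0.81, 0.69, 0.75, 0.80])
        se_c = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02])

        # Meta-analyse on the logit scale, as recommended above.
        y = np.log(c / (1 - c))
        se = se_c / (c * (1 - c))          # delta method: SE(logit c) = SE(c) / (c (1 - c))
        w = 1 / se**2

        # DerSimonian-Laird between-study variance.
        y_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fixed) ** 2)
        k = y.size
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

        # Random-effects pooled estimate and 95% confidence interval.
        w_re = 1 / (se**2 + tau2)
        mu = np.sum(w_re * y) / np.sum(w_re)
        se_mu = np.sqrt(1 / np.sum(w_re))
        lo, hi = mu - 1.96 * se_mu, mu + 1.96 * se_mu

        back = lambda z: 1 / (1 + np.exp(-z))      # back-transform to the C-statistic scale
        print(f"pooled C = {back(mu):.3f}  (95% CI {back(lo):.3f} to {back(hi):.3f}),  tau^2 = {tau2:.3f}")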

  17. How does abundance scale with body size in coupled size-structured food webs?

    PubMed

    Blanchard, Julia L; Jennings, Simon; Law, Richard; Castle, Matthew D; McCloghrie, Paul; Rochet, Marie-Joëlle; Benoît, Eric

    2009-01-01

    1. Widely observed macro-ecological patterns in log abundance vs. log body mass of organisms can be explained by simple scaling theory based on food (energy) availability across a spectrum of body sizes. The theory predicts that when food availability falls with body size (as in most aquatic food webs where larger predators eat smaller prey), the scaling between log N vs. log m is steeper than when organisms of different sizes compete for a shared unstructured resource (e.g. autotrophs, herbivores and detritivores; hereafter dubbed 'detritivores'). 2. In real communities, the mix of feeding characteristics gives rise to complex food webs. Such complexities make empirical tests of scaling predictions prone to error if: (i) the data are not disaggregated in accordance with the assumptions of the theory being tested, or (ii) the theory does not account for all of the trophic interactions within and across the communities sampled. 3. We disaggregated whole community data collected in the North Sea into predator and detritivore components and report slopes of log abundance vs. log body mass relationships. Observed slopes for fish and epifaunal predator communities (-1.2 to -2.25) were significantly steeper than those for infaunal detritivore communities (-0.56 to -0.87). 4. We present a model describing the dynamics of coupled size spectra, to explain how coupling of predator and detritivore communities affects the scaling of log N vs. log m. The model captures the trophic interactions and recycling of material that occur in many aquatic ecosystems. 5. Our simulations demonstrate that the biological processes underlying growth and mortality in the two distinct size spectra lead to patterns consistent with data. Slopes of log N vs. log m were steeper and growth rates faster for predators compared to detritivores. Size spectra were truncated when primary production was too low for predators and when detritivores experienced predation pressure. 6. The approach also allows us to assess the effects of external sources of mortality (e.g. harvesting). Removal of large predators resulted in steeper predator spectra and increases in their prey (small fish and detritivores). The model predictions are remarkably consistent with observed patterns of exploited ecosystems.

  18. Heterogeneous mixture distributions for multi-source extreme rainfall

    NASA Astrophysics Data System (ADS)

    Ouarda, T.; Shin, J.; Lee, T. S.

    2013-12-01

    Mixture distributions have been used to model hydro-meteorological variables showing mixture distributional characteristics, e.g. bimodality. Homogeneous mixture (HOM) distributions (e.g. Normal-Normal and Gumbel-Gumbel) have traditionally been applied to hydro-meteorological variables. However, there is no reason to restrict the mixture distribution to a combination of one identical type. It might be beneficial to characterize the statistical behavior of hydro-meteorological variables through the application of heterogeneous mixture (HTM) distributions such as Normal-Gamma. In the present work, we focus on assessing the suitability of HTM distributions for the frequency analysis of hydro-meteorological variables. To estimate the parameters of HTM distributions, a meta-heuristic algorithm (a genetic algorithm) is employed to maximize the likelihood function. A number of distributions are compared, including the Gamma-Extreme value type-one (EV1) HTM distribution, the EV1-EV1 HOM distribution, and the EV1 distribution. The proposed distribution models are applied to annual maximum precipitation data in South Korea. The Akaike Information Criterion (AIC), the root mean squared error (RMSE) and the log-likelihood are used as measures of goodness-of-fit of the tested distributions. Results indicate that the HTM distribution (Gamma-EV1) provides the best fit. The HTM distribution shows significant improvement in the estimation of quantiles corresponding to the 20-year return period. It is shown that extreme rainfall in the coastal region of South Korea presents strong heterogeneous mixture distributional characteristics. Results indicate that HTM distributions are a good alternative for the frequency analysis of hydro-meteorological variables when disparate statistical characteristics are present.
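
    A heterogeneous mixture such as Gamma-EV1 can be fitted by maximizing the mixture likelihood with a population-based optimizer; the sketch below uses differential evolution in place of the paper's genetic algorithm, and the synthetic rainfall sample and parameter bounds are assumptions.

        import numpy as np
        from scipy import stats
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(10)

        # Synthetic annual-maximum rainfall (mm): a Gamma and a Gumbel (EV1) population mixed,
        # standing in for multi-source extremes; parameters are illustrative.
        x = np.concatenate([rng.gamma(4.0, 25.0, 300), rng.gumbel(220.0, 40.0, 200)])

        def neg_loglik(theta):
            w, ga_shape, ga_scale, gu_loc, gu_scale = theta
            pdf = (w * stats.gamma.pdf(x, ga_shape, scale=ga_scale)
                   + (1 - w) * stats.gumbel_r.pdf(x, loc=gu_loc, scale=gu_scale))
            return -np.sum(np.log(pdf + 1e-300))

        bounds = [(0.01, 0.99), (0.5, 20.0), (1.0, 100.0), (50.0, 400.0), (5.0, 100.0)]
        fit = differential_evolution(neg_loglik, bounds, seed=0, tol=1e-7)
        w, ga_shape, ga_scale, gu_loc, gu_scale = fit.x
        print(f"weight = {w:.2f}  Gamma(shape={ga_shape:.2f}, scale={ga_scale:.1f})  "
              f"Gumbel(loc={gu_loc:.1f}, scale={gu_scale:.1f})   AIC = {2*5 + 2*fit.fun:.1f}")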

  19. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    PubMed

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years, P < 0.001), systolic (126.37 ± 20.25 vs. 119.21 ± 15.81 mmHg, P < 0.001) and diastolic (78.14 ± 14.21 vs. 67.54 ± 11.46 mmHg, P < 0.001) blood pressures. The distribution curves of mean AOV differed between patients and controls (smaller AOV for larger vessels in patients; P < 0.001) as well as between patients without retinopathy and those with non-proliferative diabetic retinopathy (NPDR); with larger AOV for smaller vessels in NPDR ( P < 0.001). Controlling for the effect of confounders, patients had a smaller total AOV, larger total SD of AOV, and a more skewed distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  20. Is Coefficient Alpha Robust to Non-Normal Data?

    PubMed Central

    Sheng, Yanyan; Sheng, Zhaohui

    2011-01-01

    Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
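
    A minimal Monte Carlo sketch of the kind of check described above: the sample coefficient alpha is computed for items built from a common true score plus either normal or heavily skewed (exponential) error scores of equal variance. The generating model, sample size, and test length are illustrative assumptions, not the authors' design.

```python
import numpy as np

def cronbach_alpha(scores):
    """Sample coefficient alpha for an (n_persons, n_items) score matrix."""
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(42)
n, k, reps = 300, 10, 2000

alphas_normal, alphas_skewed = [], []
for _ in range(reps):
    true = rng.normal(size=(n, 1))                      # common true score
    err_norm = rng.normal(size=(n, k))                  # normal errors, variance 1
    err_skew = rng.exponential(1.0, size=(n, k)) - 1.0  # skewed errors, variance 1
    alphas_normal.append(cronbach_alpha(true + err_norm))
    alphas_skewed.append(cronbach_alpha(true + err_skew))

print(np.mean(alphas_normal), np.std(alphas_normal))
print(np.mean(alphas_skewed), np.std(alphas_skewed))
```

    Comparing the two sets of replicates shows how a skewed error distribution changes the bias and spread of the sample alpha relative to the normal baseline, and rerunning with larger n illustrates the sample-size effect noted above.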

  1. The influence of local majority opinions on the dynamics of the Sznajd model

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno

    2014-03-01

    In this work we study Sznajd-like opinion dynamics on a square lattice of linear size L. Each agent has a convincing power C, which is a time-dependent quantity. A group of four agents sharing the same opinion and having high convincing power may convince its neighbors to follow the group opinion, which in turn increases the group's convincing power. In addition, a group with a local majority opinion (3 up/1 down spins or 1 up/3 down spins) can persuade the agents neighboring the group with probability p, provided the group's convincing power is high enough. The two mechanisms (convincing powers and probability p) increase the competition among opinions, which avoids dictatorship (full consensus, all spins parallel) for a wide range of the model's parameters and favors democratic states (partial order, with the majority of spins pointing in one direction). We find that the relaxation times of the model follow log-normal distributions, and that the average relaxation time τ grows with system size as τ ~ L^(5/2), independent of p. We also discuss the occurrence of the usual phase transition of the Sznajd model.

  2. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

    In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure; more specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). Bioequivalence studies are usually conducted with a crossover design; however, when the half-life of a drug is long, a parallel design may be preferred. In this study, a two-sided interval estimate - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
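
    As a sketch of the interval check for a parallel design, the code below computes a 90% two-sided confidence interval for the difference of log-scale means using Satterthwaite's approximation and tests whether it falls within (-0.223, 0.223). The data and group sizes are hypothetical, and the paper's sample-size calculation from the asymptotic bivariate normal distribution of the limits is not reproduced here.

```python
import numpy as np
from scipy.stats import t

def abe_ci_parallel(x_test, x_ref, alpha=0.05):
    """100(1 - 2*alpha)% CI for the difference in log-scale means (Satterthwaite df)."""
    n1, n2 = len(x_test), len(x_ref)
    d = x_test.mean() - x_ref.mean()
    v1, v2 = x_test.var(ddof=1) / n1, x_ref.var(ddof=1) / n2
    se = np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    q = t.ppf(1.0 - alpha, df)
    return d - q * se, d + q * se

rng = np.random.default_rng(7)
log_test = rng.normal(4.00, 0.25, size=36)   # hypothetical log-transformed AUC values
log_ref = rng.normal(3.95, 0.25, size=36)

lo, hi = abe_ci_parallel(log_test, log_ref)
print(f"90% CI: ({lo:.3f}, {hi:.3f}); ABE concluded: {lo > -0.223 and hi < 0.223}")
```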

  3. Resilience of a heavily logged grove of giant sequoia (Sequoiadendron giganteum) in Kings Canyon National Park, California

    USGS Publications Warehouse

    Stohlgren, Thomas J.

    1992-01-01

    The Big Stump Grove of giant sequoia (Sequoiadendron giganteum (Lindl.) Buchholz) was heavily logged between 1883 and 1889, and the stand naturally regenerated from seed following logging. In 1968, as part of a 100% sequoia tree inventory, all living sequoias (n = 3587) and dead trees and stumps (n = 588) were measured (diameter at breast height, dbh) and mapped. A comparison of pre- to post-logging (85 years later, in 1968) stand characteristics showed that the estimated basal area of 56.7 m2 ha−1 in the pre-cut 1883 Big Stump Grove was very similar to the population mean basal area of 30 other giant sequoia groves (with more than 30 trees) in Sequoia and Kings Canyon National Parks. Sequoia density in 1968 was 1.5 times higher than the population mean, and over 45% of the basal area had been recovered after only 85 years. Assuming most re-establishment occurred over roughly a 9 year period (1883–1892), the diameter growth rate of trees less than 1.95 m dbh averaged 6.1–6.8 mm year−1 but varied greatly, as the 24 trees in the 1.8 m size class had a mean diameter growth rate of 21–24 mm year−1. Data generated by dividing the grove into 0.25 ha contiguous plots indicated that only about 3.3 ha of the pre-cut 1883 grove did not have sequoia regeneration, whereas 16.5 ha of the 1968 grove had sequoia regeneration but no sign of logs or stumps. The proportion of only-regeneration plots was significantly greater. In the current (t = 0; 1968) stand, overrepresentation of 0.3–1.2 m dbh trees may produce a bimodal size distribution lasting perhaps 800 years or more into the future. Giant sequoia stand characteristics such as age and size structure are not highly resilient and may take several centuries to approach the ‘domain’ of age or size structure typical of old-growth sequoia forests. Grove boundaries may be less stable following a major disturbance.

  4. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
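
    A minimal sketch of the step-size idea for a one-dimensional two-component normal mixture: the familiar successive-approximations (EM-type) update is computed and then relaxed by a step size omega, so omega = 1 recovers the plain fixed-point iteration while other values in (0, 2) correspond to the generalized procedure discussed above. This is an illustration under assumed data, not the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import norm

def em_update(x, w, mu1, mu2, s1, s2):
    """One successive-approximations (EM) update for a two-component normal mixture."""
    p1 = w * norm.pdf(x, mu1, s1)
    p2 = (1.0 - w) * norm.pdf(x, mu2, s2)
    r = p1 / (p1 + p2)                              # responsibilities of component 1
    mu1_new = np.sum(r * x) / np.sum(r)
    mu2_new = np.sum((1 - r) * x) / np.sum(1 - r)
    s1_new = np.sqrt(np.sum(r * (x - mu1_new) ** 2) / np.sum(r))
    s2_new = np.sqrt(np.sum((1 - r) * (x - mu2_new) ** 2) / np.sum(1 - r))
    return np.array([r.mean(), mu1_new, mu2_new, s1_new, s2_new])

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(2.5, 1.2, 600)])

omega = 1.0                                          # step size; try values in (0, 2)
theta = np.array([0.5, -1.0, 1.0, 1.5, 1.5])         # w, mu1, mu2, s1, s2
for _ in range(200):
    theta = theta + omega * (em_update(x, *theta) - theta)
print(theta)
```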

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  6. Lognormal Behavior of the Size Distributions of Animation Characters

    NASA Astrophysics Data System (ADS)

    Yamamoto, Ken

    This study investigates the statistical property of the character sizes of animation, superhero series, and video game. By using online databases of Pokémon (video game) and Power Rangers (superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) follow the lognormal distribution in common. For the theoretical mechanism of this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.

  7. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    NASA Astrophysics Data System (ADS)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely log-normal, exponential, Gamma and Weibull, in search of the best-fit distribution for the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This helps minimize the uncertainty in pollution resource estimates and improve the assessment phase of planning. Conflicts among the criteria in selecting the best distribution were resolved using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of the air pollutant data in Kuala Lumpur.
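
    The distribution comparison can be sketched with SciPy: fit each candidate by maximum likelihood and compare AIC, one of several goodness-of-fit criteria used in the paper (the weight-of-ranks step is omitted). The API values below are synthetic placeholders rather than the Kuala Lumpur data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
api = stats.gamma.rvs(3.0, scale=18.0, size=365, random_state=rng)  # placeholder API series

candidates = {"log-normal": stats.lognorm,
              "exponential": stats.expon,
              "gamma": stats.gamma,
              "weibull": stats.weibull_min}

for name, dist in candidates.items():
    params = dist.fit(api, floc=0)                 # location fixed at zero for positive data
    loglik = np.sum(dist.logpdf(api, *params))
    n_free = len(params) - 1                       # free parameters (location is fixed)
    print(f"{name:12s} AIC = {2 * n_free - 2 * loglik:.1f}")
```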

  8. Induction log responses to layered, dipping, and anisotropic formations: Induction log shoulder-bed corrections to anisotropic formations and the effect of shale anisotropy in thinly laminated sand/shale sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagiwara, Teruhiko

    1996-12-31

    Induction log responses to layered, dipping, and anisotropic formations are examined analytically. The analytical model is especially helpful in understanding induction log responses to thinly laminated binary formations, such as sand/shale sequences, that exhibit macroscopically anisotropic resistivity. Two applications of the analytical model are discussed. In one application we examine special induction log shoulder-bed corrections for use when thin anisotropic beds are encountered. It is known that thinly laminated sand/shale sequences act as macroscopically anisotropic formations. Hydrocarbon-bearing formations also act as macroscopically anisotropic formations when they consist of alternating layers of different grain-size distributions. When such formations are thick, induction logs accurately read the macroscopic conductivity, from which the hydrocarbon saturation in the formations can be computed. When the laminated formations are not thick, proper shoulder-bed corrections (or thin-bed corrections) should be applied to obtain the true macroscopic formation conductivity and to estimate the hydrocarbon saturation more accurately. The analytical model is used to calculate the thin-bed effect and to evaluate the shoulder-bed corrections. We show that the formation resistivity and hence the hydrocarbon saturation are greatly overestimated when the anisotropy effect is not accounted for and conventional shoulder-bed corrections are applied to the log responses from such laminated formations.

  9. SU-D-BRC-03: Development and Validation of an Online 2D Dose Verification System for Daily Patient Plan Delivery Accuracy Check

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, J; Hu, W; Xing, Y

    Purpose: All plan verification systems for particle therapy are designed to do plan verification before treatment. However, the actual dose distributions during patient treatment are not known. This study develops an online 2D dose verification tool to check the daily dose delivery accuracy. Methods: A Siemens particle treatment system with a modulated scanning spot beam is used in our center. In order to do online dose verification, we made a program to reconstruct the delivered 2D dose distributions based on the daily treatment log files and depth dose distributions. From the log files we can get the focus size, position and particle number for each spot. A gamma analysis is used to compare the reconstructed dose distributions with the dose distributions from the TPS to assess the daily dose delivery accuracy. To verify the dose reconstruction algorithm, we compared the reconstructed dose distributions to dose distributions measured using a PTW 729XDR ion chamber matrix for 13 real patient plans. Then we analyzed 100 treatment beams (58 carbon and 42 proton) for prostate, lung, ACC, NPC and chordoma patients. Results: For algorithm verification, the gamma passing rate was 97.95% for the 3%/3mm and 92.36% for the 2%/2mm criteria. For patient treatment analysis, the results were 97.7%±1.1% and 91.7%±2.5% for carbon and 89.9%±4.8% and 79.7%±7.7% for proton using the 3%/3mm and 2%/2mm criteria, respectively. The reason for the lower passing rate for the proton beam is that the focus size deviations were larger than for the carbon beam. The average focus size deviations were −14.27% and −6.73% for proton and −5.26% and −0.93% for carbon in the x and y directions, respectively. Conclusion: The verification software meets our requirements to check for daily dose delivery discrepancies. Such tools can enhance the current treatment plan and delivery verification processes and improve the safety of clinical treatments.

  10. The influence of topology on hydraulic conductivity in a sand-and-gravel aquifer

    USGS Publications Warehouse

    Morin, Roger H.; LeBlanc, Denis R.; Troutman, Brent M.

    2010-01-01

    A field experiment consisting of geophysical logging and tracer testing was conducted in a single well that penetrated a sand-and-gravel aquifer at the U.S. Geological Survey Toxic Substances Hydrology research site on Cape Cod, Massachusetts. Geophysical logs and flowmeter/pumping measurements were obtained to estimate vertical profiles of porosity ϕ, hydraulic conductivity K, temperature, and bulk electrical conductivity under background, freshwater conditions. Saline-tracer fluid was then injected into the well for 2 h and its radial migration into the surrounding deposits was monitored by recording an electromagnetic-induction log every 10 min. The field data are analyzed and interpreted primarily through the use of Archie's (1942) law to investigate the role of topological factors such as pore geometry and connectivity, and grain size and packing configuration in regulating fluid flow through these coarse-grained materials. The logs reveal no significant correlation between K and ϕ, and imply that groundwater models that link these two properties may not be useful at this site. Rather, it is the distribution and connectivity of the fluid phase as defined by formation factor F, cementation index m, and tortuosity α that primarily control the hydraulic conductivity. Results show that F correlates well with K, thereby indicating that induction logs provide qualitative information on the distribution of hydraulic conductivity. A comparison of α, which incorporates porosity data, with K produces only a slightly better correlation and further emphasizes the weak influence of the bulk value of ϕ on K.

  11. The influence of topology on hydraulic conductivity in a sand-and-gravel aquifer

    USGS Publications Warehouse

    Morin, R.H.; LeBlanc, D.R.; Troutman, B.M.

    2010-01-01

    A field experiment consisting of geophysical logging and tracer testing was conducted in a single well that penetrated a sand-and-gravel aquifer at the U.S. Geological Survey Toxic Substances Hydrology research site on Cape Cod, Massachusetts. Geophysical logs and flowmeter/pumping measurements were obtained to estimate vertical profiles of porosity ϕ, hydraulic conductivity K, temperature, and bulk electrical conductivity under background, freshwater conditions. Saline-tracer fluid was then injected into the well for 2 h and its radial migration into the surrounding deposits was monitored by recording an electromagnetic-induction log every 10 min. The field data are analyzed and interpreted primarily through the use of Archie's (1942) law to investigate the role of topological factors such as pore geometry and connectivity, and grain size and packing configuration in regulating fluid flow through these coarse-grained materials. The logs reveal no significant correlation between K and ϕ, and imply that groundwater models that link these two properties may not be useful at this site. Rather, it is the distribution and connectivity of the fluid phase as defined by formation factor F, cementation index m, and tortuosity α that primarily control the hydraulic conductivity. Results show that F correlates well with K, thereby indicating that induction logs provide qualitative information on the distribution of hydraulic conductivity. A comparison of α, which incorporates porosity data, with K produces only a slightly better correlation and further emphasizes the weak influence of the bulk value of ϕ on K. Copyright © 2009 The Author(s) are Federal Government Employees. Journal compilation © 2009 National Ground Water Association.
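
    A small sketch of the Archie (1942) relation used in the two records above: the formation factor F is computed from fluid and bulk electrical conductivities, and the cementation exponent m is estimated from a log-log fit against porosity. The profile values are hypothetical placeholders, not the Cape Cod logs.

```python
import numpy as np

# Hypothetical depth-averaged values; the field study derived F from
# electromagnetic-induction logs and porosity from other geophysical logs.
porosity = np.array([0.31, 0.28, 0.35, 0.30, 0.26, 0.33, 0.29, 0.32])
bulk_cond = np.array([0.021, 0.018, 0.026, 0.020, 0.016, 0.024, 0.019, 0.022])  # S/m
fluid_cond = 0.060                                                              # S/m

# Archie (1942): F = sigma_fluid / sigma_bulk = a * phi**(-m)
F = fluid_cond / bulk_cond
slope, intercept = np.polyfit(np.log10(porosity), np.log10(F), 1)
m = -slope             # cementation exponent
a = 10.0 ** intercept  # prefactor, close to 1 for clean sands
print(f"m = {m:.2f}, a = {a:.2f}")
```

    Correlating log F (or a derived tortuosity) with a flowmeter-based hydraulic-conductivity profile, as the study does, is then a single np.corrcoef call on the two log-transformed series.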

  12. Methodological study of affine transformations of gene expression data with proposed robust non-parametric multi-dimensional normalization method.

    PubMed

    Bengtsson, Henrik; Hössjer, Ola

    2006-03-01

    Low-level processing and normalization of microarray data are most important steps in microarray analysis, which have profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization are revisited in the light of the affine model and their strengths and weaknesses are investigated in this context. As a direct result from this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is a platform-independent package for R.

  13. Evaluating quantitative 3-D image analysis as a design tool for low enriched uranium fuel compacts for the transient reactor test facility: A preliminary study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, J. J.; van Rooyen, I. J.; Craft, A. E.

    In this study, 3-D image analysis when combined with a non-destructive examination technique such as X-ray computed tomography (CT) provides a highly quantitative tool for the investigation of a material’s structure. In this investigation 3-D image analysis and X-ray CT were combined to analyze the microstructure of a preliminary subsized fuel compact for the Transient Reactor Test Facility’s low enriched uranium conversion program to assess the feasibility of the combined techniques for use in the optimization of the fuel compact fabrication process. The quantitative image analysis focused on determining the size and spatial distribution of the surrogate fuel particles and the size, shape, and orientation of voids within the compact. Additionally, the maximum effect of microstructural features on heat transfer through the carbonaceous matrix of the preliminary compact was estimated. The surrogate fuel particles occupied 0.8% of the compact by volume with a log-normal distribution of particle sizes with a mean diameter of 39 μm and a standard deviation of 16 μm. Roughly 39% of the particles had a diameter greater than the specified maximum particle size of 44 μm suggesting that the particles agglomerate during fabrication. The local volume fraction of particles also varies significantly within the compact although uniformities appear to be evenly dispersed throughout the analysed volume. The voids produced during fabrication were on average plate-like in nature with their major axis oriented perpendicular to the compaction direction of the compact. Finally, the microstructure, mainly the large preferentially oriented voids, may cause a small degree of anisotropy in the thermal diffusivity within the compact. α∥/α⊥, the ratio of thermal diffusivities parallel to and perpendicular to the compaction direction, are expected to be no less than 0.95 with an upper bound of 1.

  14. Evaluating quantitative 3-D image analysis as a design tool for low enriched uranium fuel compacts for the transient reactor test facility: A preliminary study

    DOE PAGES

    Kane, J. J.; van Rooyen, I. J.; Craft, A. E.; ...

    2016-02-05

    In this study, 3-D image analysis when combined with a non-destructive examination technique such as X-ray computed tomography (CT) provides a highly quantitative tool for the investigation of a material’s structure. In this investigation 3-D image analysis and X-ray CT were combined to analyze the microstructure of a preliminary subsized fuel compact for the Transient Reactor Test Facility’s low enriched uranium conversion program to assess the feasibility of the combined techniques for use in the optimization of the fuel compact fabrication process. The quantitative image analysis focused on determining the size and spatial distribution of the surrogate fuel particles and the size, shape, and orientation of voids within the compact. Additionally, the maximum effect of microstructural features on heat transfer through the carbonaceous matrix of the preliminary compact was estimated. The surrogate fuel particles occupied 0.8% of the compact by volume with a log-normal distribution of particle sizes with a mean diameter of 39 μm and a standard deviation of 16 μm. Roughly 39% of the particles had a diameter greater than the specified maximum particle size of 44 μm suggesting that the particles agglomerate during fabrication. The local volume fraction of particles also varies significantly within the compact although uniformities appear to be evenly dispersed throughout the analysed volume. The voids produced during fabrication were on average plate-like in nature with their major axis oriented perpendicular to the compaction direction of the compact. Finally, the microstructure, mainly the large preferentially oriented voids, may cause a small degree of anisotropy in the thermal diffusivity within the compact. α∥/α⊥, the ratio of thermal diffusivities parallel to and perpendicular to the compaction direction, are expected to be no less than 0.95 with an upper bound of 1.
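
    Using the log-normal parameters reported in the two records above (arithmetic mean 39 μm, standard deviation 16 μm), the sketch below converts them to log-space parameters and computes the fraction of the fitted distribution above the 44 μm specification. The fitted fraction (roughly 31%) does not exactly reproduce the 39% counted empirically, which was measured from the images themselves, so the comparison is only indicative.

```python
import numpy as np
from scipy.stats import norm

mean_d, sd_d = 39.0, 16.0   # reported particle-diameter statistics (micrometres)
spec_max = 44.0             # specified maximum particle size (micrometres)

# Convert arithmetic mean/SD to the (mu, sigma) of ln(d) for a log-normal fit.
sigma2 = np.log(1.0 + (sd_d / mean_d) ** 2)
mu = np.log(mean_d) - 0.5 * sigma2
sigma = np.sqrt(sigma2)

frac_over = 1.0 - norm.cdf((np.log(spec_max) - mu) / sigma)
print(f"fitted fraction above {spec_max} um: {frac_over:.1%}")
```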

  15. Comprehensive studies of ultrashort laser pulse ablation of tin target at terawatt power

    NASA Astrophysics Data System (ADS)

    Elsied, Ahmed M.; Diwakar, Prasoon K.; Hassanein, Ahmed

    2018-01-01

    The fundamental properties of ultrashort laser interactions with metals using up to terawatt power were comprehensively studied, i.e., specifically mass ablation, nanoparticle formation, and ion dynamics using multitude of diagnostic techniques. Results of this study can be useful in many fields of research including spectroscopy, micromachining, thin film fabrication, particle acceleration, physics of warm dense matter, and equation-of-state determination. A Ti:Sapphire femtosecond laser system (110 mJ maximum energy, 40 fs, 800 nm, P-polarized, single pulse mode) was used, which delivered up to 3 terawatt laser power to ablate 1 mm tin film in vacuum. The experimental analysis includes the effect of the incident laser fluence on the ablated mass, size of the ablated area, and depth of ablation using white light profilometer. Atomic force microscope was used to measure the emitted particles size distribution at different laser fluence. Faraday cup (FC) detector was used to analyze the emitted ions flux by measuring the velocity, and the total charge of the emitted ions. The study shows that the size of emitted particles follows log-normal distribution with peak shifts depending on incident laser fluence. The size of the ablated particles ranges from 20 to 80 nm. The nanoparticles deposited on the wafer tend to aggregate and to be denser as the incident laser fluence increases as shown by AFM images. Laser ablation depth was found to increase logarithmically with laser fluence then leveling off at laser fluence > 400 J/cm2. The total ablated mass tends to increase logarithmically with laser fluence up to 60 J/cm2 while, increases gradually at higher fluence due to the increase in the ablated area. The measured ion emitted flux shows a linear dependence on laser fluence with two distinct regimes. Strong dependence on laser fluence was observed at fluences < 350 J/cm2. Also, a slight enhancement in ion velocity was observed with increasing laser fluence up to 350 J/cm2.

  16. IMPLEMENTING A NOVEL CYCLIC CO2 FLOOD IN PALEOZOIC REEFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James R. Wood; W. Quinlan; A. Wylie

    2003-07-01

    Recycled CO2 will be used in this demonstration project to produce bypassed oil from the Silurian Charlton 6 pinnacle reef (Otsego County) in the Michigan Basin. Contract negotiations by our industry partner to gain access to this CO2, which would otherwise be vented to the atmosphere, are near completion. A new method of subsurface characterization, log curve amplitude slicing, is being used to map facies distributions and reservoir properties in two reefs, the Belle River Mills and Chester 18 Fields. The Belle River Mills and Chester 18 fields are being used as type fields because they have excellent log-curve and core data coverage. Amplitude slicing of the normalized gamma ray curves is showing trends that may indicate significant heterogeneity and compartmentalization in these reservoirs. Digital and hard copy data continue to be compiled for the Niagaran reefs in the Michigan Basin. Technology transfer took place through technical presentations regarding the log curve amplitude slicing technique and a booth at the Midwest PTTC meeting.

  17. Daily magnesium intake and serum magnesium concentration among Japanese people.

    PubMed

    Akizawa, Yoriko; Koizumi, Sadayuki; Itokawa, Yoshinori; Ojima, Toshiyuki; Nakamura, Yosikazu; Tamura, Tarou; Kusaka, Yukinori

    2008-01-01

    It remains unclear which vitamins and minerals are deficient in the daily diet of a normal adult. To address this question, we conducted a population survey focusing on the relationship between dietary magnesium intake and serum magnesium level. The subjects were 62 individuals from Fukui Prefecture who participated in the 1998 National Nutrition Survey. The survey investigated the physical status, nutritional status, and dietary data of the subjects. Holidays and special occasions were avoided, and a day when people are most likely to be on an ordinary diet was selected as the survey date. The mean (±standard deviation) daily magnesium intake was 322 (±132), 323 (±163), and 322 (±147) mg/day for men, women, and the entire group, respectively. The mean (±standard deviation) serum magnesium concentration was 20.69 (±2.83), 20.69 (±2.88), and 20.69 (±2.83) ppm for men, women, and the entire group, respectively. The distribution of serum magnesium concentration was normal. Dietary magnesium intake showed a log-normal distribution and was therefore log-transformed before the regression coefficients were examined. The regression of serum magnesium concentration (Y ppm) on daily magnesium intake (X mg) was Y = 4.93 log10(X) + 8.49, with a correlation coefficient (r) of 0.29. A regression line (Y = 14.65X + 19.31) was observed between the daily intake of magnesium (Y mg) and serum magnesium concentration (X ppm), with a correlation coefficient of 0.28. The daily magnesium intake correlated with serum magnesium concentration, and a linear regression model between them was proposed.
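
    The reported regression can be checked directly; evaluating it at the mean intake of 322 mg/day gives a predicted serum concentration of about 20.9 ppm, close to the observed mean of 20.69 ppm.

```python
import math

def serum_mg_ppm(intake_mg):
    """Regression reported in the abstract: serum Mg (ppm) vs. daily intake (mg)."""
    return 4.93 * math.log10(intake_mg) + 8.49

print(serum_mg_ppm(322))   # about 20.9 ppm
```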

  18. Cyprinid fishes of the genus Neolissochilus in Peninsular Malaysia.

    PubMed

    Khaironizam, M Z; Akaria-Ismail, M; Armbruster, Jonathan W

    2015-05-22

    Meristic, morphometric and distributional patterns of cyprinid fishes of the genus Neolissochilus found in Peninsular Malaysia are presented. Based on the current concept of Neolissochilus, only two species are present: N. soroides and N. hendersoni. Neolissochilus hendersoni differs from N. soroides by having lower scale and gill raker counts. Neolissochilus soroides has three mouth types (normal with a rounded snout, snout with a truncate edge, and lobe with a comparatively thick lower lip). A PCA of log-transformed measurements did not reveal significant differences between N. hendersoni and N. soroides, or between any of the morphotypes of N. soroides; however, a CVA of log-transformed measurements successfully classified 87.1% of all specimens. Removing body size by running a CVA on all of the principal components except PC1 (which was correlated with length) only slightly decreased the successful classification rate to 86.1%. Differences in morphometrics were as great between the three morphotypes of N. soroides as between any of the morphotypes and N. hendersoni suggesting that the morphotypes should be examined in greater detail with genetic tools. The PCA of morphometrics revealed separate clouds for N. hendersoni and N. soroides, but no differences between the N. soroides morphotypes. This study revealed that N. hendersoni is recorded for the first time in the mainland area of Peninsular Malaysia. Other nominal species of Neolissochilus reported to occur in the river systems of Peninsular Malaysia are discussed. Lissochilus tweediei Herre in Herre & Myers 1937 and Tor soro Bishop 1973 are synonyms of Neolissochilus soroides.

  19. Choice of Stimulus Range and Size Can Reduce Test-Retest Variability in Glaucomatous Visual Field Defects

    PubMed Central

    Swanson, William H.; Horner, Douglas G.; Dul, Mitchell W.; Malinovsky, Victor E.

    2014-01-01

    Purpose To develop guidelines for engineering perimetric stimuli to reduce test-retest variability in glaucomatous defects. Methods Perimetric testing was performed on one eye for 62 patients with glaucoma and 41 age-similar controls on size III and frequency-doubling perimetry and three custom tests with Gaussian blob and Gabor sinusoid stimuli. Stimulus range was controlled by values for ceiling (maximum sensitivity) and floor (minimum sensitivity). Bland-Altman analysis was used to derive 95% limits of agreement on test and retest, and bootstrap analysis was used to test the hypotheses about peak variability. Results Limits of agreement for the three custom stimuli were similar in width (0.72 to 0.79 log units) and peak variability (0.22 to 0.29 log units) for a stimulus range of 1.7 log units. The width of the limits of agreement for size III decreased from 1.78 to 1.37 to 0.99 log units for stimulus ranges of 3.9, 2.7, and 1.7 log units, respectively (F = 3.23, P < 0.001); peak variability was 0.99, 0.54, and 0.34 log units, respectively (P < 0.01). For a stimulus range of 1.3 log units, limits of agreement were narrowest with Gabor and widest with size III stimuli, and peak variability was lower (P < 0.01) with Gabor (0.18 log units) and frequency-doubling perimetry (0.24 log units) than with size III stimuli (0.38 log units). Conclusions Test-retest variability in glaucomatous visual field defects was substantially reduced by engineering the stimuli. Translational Relevance The guidelines should allow developers to choose from a wide range of stimuli. PMID:25371855

  20. Choice of Stimulus Range and Size Can Reduce Test-Retest Variability in Glaucomatous Visual Field Defects.

    PubMed

    Swanson, William H; Horner, Douglas G; Dul, Mitchell W; Malinovsky, Victor E

    2014-09-01

    To develop guidelines for engineering perimetric stimuli to reduce test-retest variability in glaucomatous defects. Perimetric testing was performed on one eye for 62 patients with glaucoma and 41 age-similar controls on size III and frequency-doubling perimetry and three custom tests with Gaussian blob and Gabor sinusoid stimuli. Stimulus range was controlled by values for ceiling (maximum sensitivity) and floor (minimum sensitivity). Bland-Altman analysis was used to derive 95% limits of agreement on test and retest, and bootstrap analysis was used to test the hypotheses about peak variability. Limits of agreement for the three custom stimuli were similar in width (0.72 to 0.79 log units) and peak variability (0.22 to 0.29 log units) for a stimulus range of 1.7 log units. The width of the limits of agreement for size III decreased from 1.78 to 1.37 to 0.99 log units for stimulus ranges of 3.9, 2.7, and 1.7 log units, respectively ( F = 3.23, P < 0.001); peak variability was 0.99, 0.54, and 0.34 log units, respectively ( P < 0.01). For a stimulus range of 1.3 log units, limits of agreement were narrowest with Gabor and widest with size III stimuli, and peak variability was lower ( P < 0.01) with Gabor (0.18 log units) and frequency-doubling perimetry (0.24 log units) than with size III stimuli (0.38 log units). Test-retest variability in glaucomatous visual field defects was substantially reduced by engineering the stimuli. The guidelines should allow developers to choose from a wide range of stimuli.
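
    The Bland-Altman step named in the two records above is compact enough to sketch: the 95% limits of agreement are the mean test-retest difference plus or minus 1.96 standard deviations of the differences. The simulated log-unit sensitivities are placeholders, and the bootstrap analysis of peak variability is not reproduced.

```python
import numpy as np

def limits_of_agreement(test, retest):
    """Bland-Altman 95% limits of agreement for test-retest values (log units)."""
    diff = np.asarray(retest) - np.asarray(test)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias - half_width, bias + half_width

rng = np.random.default_rng(11)
test = rng.normal(1.2, 0.4, size=60)              # hypothetical log-unit sensitivities
retest = test + rng.normal(0.0, 0.2, size=60)     # retest with measurement noise
print(limits_of_agreement(test, retest))
```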

  1. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    PubMed

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    The time to donating blood plays a major role in a regular donor becoming a continuous one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and followed up for five years. Among these, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, place of residence and job were recorded as independent variables. Data analysis was based on a log-normal hazard model with gamma correlated frailty; in this model the frailties are the sum of two independent components, each assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov Chain Monte Carlo algorithm in OpenBUGS, and convergence was checked via the Gelman-Rubin criterion using the BOA package in R. Age, job and education had significant effects on the chance to donate blood (P<0.05). The chances of blood donation were higher for older donors, those in clerical, manual or self-employed occupations, students and more educated donors, and correspondingly the time intervals between their blood donations were shorter. Given the significant effects of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.

  2. Analytical approximations for effective relative permeability in the capillary limit

    NASA Astrophysics Data System (ADS)

    Rabinovich, Avinoam; Li, Boxiao; Durlofsky, Louis J.

    2016-10-01

    We present an analytical method for calculating two-phase effective relative permeability, krjeff, where j designates phase (here CO2 and water), under steady state and capillary-limit assumptions. These effective relative permeabilities may be applied in experimental settings and for upscaling in the context of numerical flow simulations, e.g., for CO2 storage. An exact solution for effective absolute permeability, keff, in two-dimensional log-normally distributed isotropic permeability (k) fields is the geometric mean. We show that this does not hold for krjeff since log normality is not maintained in the capillary-limit phase permeability field (Kj=k·krj) when capillary pressure, and thus the saturation field, is varied. Nevertheless, the geometric mean is still shown to be suitable for approximating krjeff when the variance of ln⁡k is low. For high-variance cases, we apply a correction to the geometric average gas effective relative permeability using a Winsorized mean, which neglects large and small Kj values symmetrically. The analytical method is extended to anisotropically correlated log-normal permeability fields using power law averaging. In these cases, the Winsorized mean treatment is applied to the gas curves for cases described by negative power law exponents (flow across incomplete layers). The accuracy of our analytical expressions for krjeff is demonstrated through extensive numerical tests, using low-variance and high-variance permeability realizations with a range of correlation structures. We also present integral expressions for geometric-mean and power law average krjeff for the systems considered, which enable derivation of closed-form series solutions for krjeff without generating permeability realizations.
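
    The averaging operations discussed above can be sketched on a synthetic log-normal permeability field: the geometric mean (the exact two-dimensional isotropic result for the effective absolute permeability), power-law averages with exponents +1 and -1 as layered-flow bounds, and a Winsorized mean that symmetrically trims both tails. In the paper the Winsorized correction is applied to the gas effective relative permeability at high variance rather than to k itself, so this is only an illustration of the operations.

```python
import numpy as np
from scipy.stats import mstats

rng = np.random.default_rng(5)
sigma = 1.5                                           # standard deviation of ln k
k = np.exp(rng.normal(0.0, sigma, size=(128, 128)))   # isotropic log-normal field

k_geom = np.exp(np.mean(np.log(k)))                   # geometric mean

def power_average(values, p):
    """Power-law average; p -> 0 recovers the geometric mean."""
    if abs(p) < 1e-12:
        return np.exp(np.mean(np.log(values)))
    return np.mean(values ** p) ** (1.0 / p)

k_arith = power_average(k.ravel(), 1.0)               # arithmetic bound (flow along layers)
k_harm = power_average(k.ravel(), -1.0)               # harmonic bound (flow across layers)

# Winsorized mean trimming 5% at both tails, illustrating the symmetric-neglect idea.
k_winsor = float(mstats.winsorize(k.ravel(), limits=(0.05, 0.05)).mean())
print(k_geom, k_arith, k_harm, k_winsor)
```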

  3. Geostatistics and Bayesian updating for transmissivity estimation in a multiaquifer system in Manitoba, Canada.

    PubMed

    Kennedy, Paula L; Woodbury, Allan D

    2002-01-01

    In ground water flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one method that interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the transmissivity in m2/sec followed a natural log normal distribution for both aquifers with a mean of -7.2 and - 8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal nature was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data was available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration, but is a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable; that is homogeneous in regions where little or no information is available to alter an initial state. No long range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.

  4. A new macroseismic intensity prediction equation and magnitude estimates of the 1811-1812 New Madrid and 1886 Charleston, South Carolina, earthquakes

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Cramer, C. H.

    2013-12-01

    We develop an intensity prediction equation (IPE) for the Central and Eastern United States, explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June, 2000 and August, 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2 M + c3 exp(λ) + c4 λ, where M is moment magnitude and λ is the mean log hypocentral distance. Previous IPEs use a limited dataset of MMI, do not differentiate between MMI and CII data in the CEUS, and do not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that reports of a given intensity have hypocentral distances that are log-normally distributed, with the distribution modulated by population and by the propensity of individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid search method to solve for the fraction of the population that reports the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial variations in the level of CII reporting; for example, large metropolitan areas appear to have a lower level of reporting relative to rural areas. In general, we find that intensities decrease with increasing distance and decreasing magnitude, as expected. Coefficients for the IPE are c1 = 1.98 ± 0.13, c2 = 1.76 ± 0.02, c3 = -0.0027 ± 0.0004, and c4 = -1.26 ± 0.03. We find significant differences in mean log hypocentral distance between MMI- and CII-based reporting, particularly at smaller mean log distance and higher intensity. Values of mean log distance for CII at high intensity tend to be smaller than for MMI at the same value of intensity. The new IPE leads to magnitude estimates for the 1811-1812 New Madrid earthquakes that are within the broad range of those determined previously. Using three MMI datasets for the New Madrid mainshocks, the new relation results in estimates for the moment magnitudes of the December 16th, 1811, January 23rd, 1812, and February 7th, 1812 mainshocks and the December 16th dawn aftershock of 7.1-7.4, 7.2, 7.5-7.7, and 6.7-7.2, respectively, with a magnitude uncertainty of about ±0.4 units. We estimate a magnitude of 7.0 ± 0.3 for the 1886 Charleston, SC earthquake.
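
    With the reported coefficients, the IPE can be evaluated directly. The sketch below assumes that λ is the natural logarithm of hypocentral distance in kilometres, which the abstract does not state explicitly, so the absolute values are illustrative only.

```python
import numpy as np

# Coefficients as reported in the abstract (central values, uncertainties dropped).
c1, c2, c3, c4 = 1.98, 1.76, -0.0027, -1.26

def predicted_mmi(magnitude, r_hypo_km):
    lam = np.log(r_hypo_km)                 # assumed: natural log of distance in km
    return c1 + c2 * magnitude + c3 * np.exp(lam) + c4 * lam

for r in (10, 50, 100, 500, 1000):
    print(r, round(float(predicted_mmi(7.5, r)), 1))
```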

  5. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.

  6. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics were compared using a "leave-one-out" cross validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated Gaussian distribution in the range of 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01 parametric Z score cross-validation false positives were 0.26% and ranged from 6.65% to 0% false positives. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The nonparametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to Gaussian distribution and high cross-validation can be achieved by the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.

  7. In Situ Balloon-Borne Ice Particle Imaging in High-Latitude Cirrus

    NASA Astrophysics Data System (ADS)

    Kuhn, Thomas; Heymsfield, Andrew J.

    2016-09-01

    Cirrus clouds reflect incoming solar radiation, creating a cooling effect. At the same time, these clouds absorb the infrared radiation from the Earth, creating a greenhouse effect. The net effect, crucial for radiative transfer, depends on the cirrus microphysical properties, such as particle size distributions and particle shapes. Knowledge of these cloud properties is also needed for calibrating and validating passive and active remote sensors. Ice particles of sizes below 100 µm are inherently difficult to measure with aircraft-mounted probes due to issues with resolution, sizing, and size-dependent sampling volume. Furthermore, artefacts are produced by shattering of particles on the leading surfaces of the aircraft probes when particles several hundred microns or larger are present. Here, we report on a series of balloon-borne in situ measurements that were carried out at a high-latitude location, Kiruna in northern Sweden (68N 21E). The method used here avoids these issues experienced with the aircraft probes. Furthermore, with a balloon-borne instrument, data are collected as vertical profiles, more useful for calibrating or evaluating remote sensing measurements than data collected along horizontal traverses. Particles are collected on an oil-coated film at a sampling speed given directly by the ascending rate of the balloon, 4 m s-1. The collecting film is advanced uniformly inside the instrument so that an always unused section of the film is exposed to ice particles, which are measured by imaging shortly after sampling. The high optical resolution of about 4 µm together with a pixel resolution of 1.65 µm allows particle detection at sizes of 10 µm and larger. For particles that are 20 µm (12 pixel) in size or larger, the shape can be recognized. The sampling volume, 130 cm3 s-1, is well defined and independent of particle size. With the encountered number concentrations of between 4 and 400 L-1, this required about 90- to 4-s sampling times to determine particle size distributions of cloud layers. Depending on how ice particles vary through the cloud, several layers per cloud with relatively uniform properties have been analysed. Preliminary results of the balloon campaign, targeting upper tropospheric, cold cirrus clouds, are presented here. Ice particles in these clouds were predominantly very small, with a median size of measured particles of around 50 µm and about 80 % of all particles below 100 µm in size. The properties of the particle size distributions at temperatures between -36 and -67 °C have been studied, as well as particle areas, extinction coefficients, and their shapes (area ratios). Gamma and log-normal distribution functions could be fitted to all measured particle size distributions achieving very good correlation with coefficients R of up to 0.95. Each distribution features one distinct mode. With decreasing temperature, the mode diameter decreases exponentially, whereas the total number concentration increases by two orders of magnitude with decreasing temperature in the same range. The high concentrations at cold temperatures also caused larger extinction coefficients, directly determined from cross-sectional areas of single ice particles, than at warmer temperatures. The mass of particles has been estimated from area and size. Ice water content (IWC) and effective diameters are then determined from the data. IWC did vary only between 1 × 10-3 and 5 × 10-3 g m-3 at temperatures below -40 °C and did not show a clear temperature trend. 
These measurements are part of an ongoing study.

  8. VizieR Online Data Catalog: Double stars with wide separations in the AGK3 (Halbwachs+, 2016)

    NASA Astrophysics Data System (ADS)

    Halbwachs, J. L.; Mayor, M.; Udry, S.

    2016-10-01

    A large list of common proper motion stars selected from the third Astronomischen Gesellschaft Katalog (AGK3) was monitored with the CORAVEL (for COrrelation RAdial VELocities) spectrovelocimeter, in order to prepare a sample of physical binaries with very wide separations. In paper I, 66 stars received special attention, since their radial velocities (RV) seemed to be variable. These stars were monitored over several years in order to derive the elements of their spectroscopic orbits. In addition, 10 of them received accurate RV measurements from the SOPHIE spectrograph of the T193 telescope at the Observatory of Haute-Provence. For deriving the orbital elements of double-lined spectroscopic binaries (SB2s), a new method was applied, which assumed that the RV of blended measurements are linear combinations of the RV of the components. 13 SB2 orbits were thus calculated. The orbital elements were eventually obtained for 52 spectroscopic binaries (SBs), two of them making a triple system. 40 SBs received their first orbit and the orbital elements were improved for 10 others. In addition, 11 SBs were discovered with very long periods for which the orbital parameters were not found. It appeared that HD 153252 has a close companion, which is a candidate brown dwarf with a minimum mass of 50 Jupiter masses. In paper II, 80 wide binaries (WBs) were detected, and 39 optical pairs were identified. Adding CPM stars with separations close enough to be almost certainly physical, a "bias-controlled" sample of 116 wide binaries was obtained, and used to derive the distribution of separations from 100 to 30,000 au. The distribution obtained does not match the log-constant distribution, but is in agreement with the log-normal distribution. The spectroscopic binaries detected among the WB components were used to derive statistical information about the multiple systems. The close binaries in WBs seem to be similar to those detected in other field stars. As for the WBs, they seem to obey the log-normal distribution of periods. The number of quadruple systems is in agreement with the "no correlation" hypothesis; this indicates that an environment conducive to the formation of WBs does not favor the formation of subsystems with periods shorter than 10 years. (9 data files).

  9. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
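
    A sketch of a likelihood-based interval for a T-year flood under a Gumbel (EV1) model, using Nelder-Mead as in the paper: the distribution is reparameterized so the T-year quantile appears explicitly, the likelihood is profiled over the scale, and the 95% interval collects all quantile values within the chi-squared cutoff. The synthetic annual-maximum series stands in for the Itajai-Acu records, and only the Gumbel case of the distributions listed above is shown.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r, chi2

rng = np.random.default_rng(1)
flows = gumbel_r.rvs(loc=300.0, scale=80.0, size=40, random_state=rng)  # synthetic AMS (m3/s)

T = 50                                   # return period in years
y_p = -np.log(-np.log(1.0 - 1.0 / T))    # Gumbel reduced variate for the T-year event

def neg_loglik(params, x):
    loc, scale = params
    if scale <= 0:
        return np.inf
    return -np.sum(gumbel_r.logpdf(x, loc=loc, scale=scale))

fit = minimize(neg_loglik, x0=[flows.mean(), flows.std()], args=(flows,), method="Nelder-Mead")
loc_hat, scale_hat = fit.x
q_hat = loc_hat + scale_hat * y_p        # maximum-likelihood T-year flood
ll_max = -fit.fun

def profile_loglik(q):
    """Maximize the likelihood over the scale with the T-year quantile fixed at q."""
    def nll(s):
        s = s[0]
        if s <= 0:
            return np.inf
        return -np.sum(gumbel_r.logpdf(flows, loc=q - s * y_p, scale=s))
    return -minimize(nll, x0=[scale_hat], method="Nelder-Mead").fun

crit = chi2.ppf(0.95, df=1)
grid = np.linspace(0.7 * q_hat, 1.5 * q_hat, 400)
inside = [q for q in grid if 2.0 * (ll_max - profile_loglik(q)) <= crit]
print(f"{T}-year flood: {q_hat:.1f}; 95% likelihood CI: ({min(inside):.1f}, {max(inside):.1f})")
```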

  10. Determining the best population-level alcohol consumption model and its impact on estimates of alcohol-attributable harms

    PubMed Central

    2012-01-01

    Background The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of the alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring whether there is a global relationship within the distribution. Methods To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation of the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. Results The Log-Normal distribution provided a poor fit for the survey data, with the Gamma and Weibull distributions providing better fits. Additionally, our analyses showed that there were no marked differences for the alcohol PAF estimates based on the Gamma or Weibull distributions compared to PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a unit increase in mean consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293; R2 = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197; R2 = 0.9474) for men. Conclusions Although the Gamma distribution and the Weibull distribution provided similar results, the Gamma distribution is recommended to model alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large degree of the variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing alcohol consumption to be modeled through a Gamma distribution using only average consumption. PMID:22490226
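
    A minimal sketch of the modelling consequence highlighted in the conclusions: if the standard deviation of consumption is roughly proportional to the mean (with the proportionality constants 1.258 for women and 1.171 for men quoted above), a Gamma consumption distribution can be parameterized from average consumption alone. The function name and usage values are hypothetical.

```python
from scipy import stats

def gamma_from_mean(mean_consumption, sex="female"):
    """Parameterize a Gamma consumption distribution from the mean alone,
    assuming SD ~= c * mean (c = 1.258 for women, 1.171 for men, per the study)."""
    c = 1.258 if sex == "female" else 1.171
    sd = c * mean_consumption
    shape = (mean_consumption / sd) ** 2      # k = 1/c^2
    scale = sd ** 2 / mean_consumption        # theta = c^2 * mean
    return stats.gamma(a=shape, scale=scale)

# Hypothetical usage: mean daily intake of 20 g pure alcohol among drinkers
dist = gamma_from_mean(20.0, sex="male")
print(dist.mean(), dist.std())                # recovers 20 and 1.171 * 20
print(1 - dist.cdf(60.0))                     # e.g. fraction drinking above 60 g/day
```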

  11. The mass distribution of coarse particulate organic matter exported from an Alpine headwater stream

    NASA Astrophysics Data System (ADS)

    Turowski, J. M.; Badoux, A.; Bunte, K.; Rickli, C.; Federspiel, N.; Jochner, M.

    2013-09-01

    Coarse particulate organic matter (CPOM) particles span sizes from 1 mm, with a dry mass of less than 1 mg, to large logs and entire trees, which can have a dry mass of several hundred kilograms. Pieces of different size and mass play different roles in stream environments, from being the prime source of energy in stream ecosystems to macroscopically determining channel morphology and local hydraulics. We show that a single scaling exponent can describe the mass distribution of CPOM heavier than 0.1 g transported in the Erlenbach, a steep mountain stream in the Swiss pre-Alps. This exponent takes an average value of -1.8, is independent of discharge, and is valid for particle masses spanning almost seven orders of magnitude. Similarly, the mass distribution of in-stream large woody debris (LWD) in several Swiss streams can be described by power-law scaling distributions, with exponents varying between -1.8 and -2.0 if all in-stream LWD is considered, and between -1.3 and -1.8 for material locked in log jams. We found similar values for in-stream and transported material in the literature. We expected the scaling exponents to be determined by stream type, vegetation, climate, substrate properties, and the connectivity between channels and hillslopes. However, none of the descriptor variables tested here, including drainage area, channel bed slope and the percentage of forested area, shows a strong control on the exponent value. Together with a rating curve of CPOM transport rates with discharge, the scaling exponents can be used in the design of measuring strategies and in natural hazard mitigation.
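
    The record reports a single scaling exponent near -1.8 for transported CPOM heavier than 0.1 g. Assuming that exponent describes a power-law probability density of particle masses above that cutoff, a standard way to estimate it is the continuous maximum-likelihood (Hill-type) estimator sketched below; the mass array and random generator are synthetic placeholders, not the Erlenbach data.

```python
import numpy as np

def powerlaw_exponent_mle(masses, m_min=0.1):
    """Maximum-likelihood estimate of alpha for p(m) ~ m^(-alpha) above m_min
    (continuous Hill/Clauset estimator) with its asymptotic standard error."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    n = m.size
    alpha = 1.0 + n / np.sum(np.log(m / m_min))
    stderr = (alpha - 1.0) / np.sqrt(n)
    return alpha, stderr

# Hypothetical usage with synthetic CPOM masses in grams
rng = np.random.default_rng(0)
u = rng.uniform(size=5000)
masses = 0.1 * (1 - u) ** (-1 / 0.8)           # samples from p(m) ~ m^-1.8 above 0.1 g
print(powerlaw_exponent_mle(masses))           # should come out close to (1.8, ~0.01)
```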

  12. Experimental measurement of cooling tower emissions using image processing of sensitive papers

    NASA Astrophysics Data System (ADS)

    Ruiz, J.; Kaiser, A. S.; Ballesta, M.; Gil, A.; Lucas, M.

    2013-04-01

    Cooling tower emissions are harmful for several reasons, such as air pollution, wetting, icing and solid-particle deposition, but mainly because of human health hazards (e.g. Legionella). There are several methods for measuring drift drops. This paper is focused on the sensitive paper technique, which is suitable for low-drift scenarios and real operating conditions. The lack of an automatic classification method motivated the development of a digital image processing algorithm for the sensitive paper method. This paper presents a detailed description of this method, in which drop-like elements are identified by means of the Canny edge detector combined with some morphological operations. Afterwards, the application of a J48 decision tree is proposed as one of the most relevant contributions. This classification method allows us to discern between stains whose origin is a drop and stains whose origin is not. The method is applied to a real case and results are presented in terms of drift and PM10 emissions. This involves the calculation of the main features of the droplet distribution at the cooling tower exit surface in terms of drop size distribution data, the cumulative mass distribution curve and characteristic drop diameters. The log-normal and Rosin-Rammler distribution functions were fitted to the experimental data collected in the tests, and the log-normal proved the more suitable of the two. Realistic PM10 calculations include the measurement of drift emissions and Total Dissolved Solids as well as the size and number of drops. Results are compared with the method proposed by the U.S. Environmental Protection Agency, whose overestimation is assessed. Drift emissions were found to be 0.0517% of the recirculating water, which exceeds the limit set by Spanish standards (0.05%).
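
    To illustrate the distribution-fitting step mentioned above, the following sketch fits both a log-normal and a Rosin-Rammler cumulative mass curve to drop-diameter data and compares their residuals; the diameters and cumulative fractions are invented placeholders, not the measured sensitive-paper data.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical cumulative mass fraction below diameter d (microns)
d = np.array([50, 100, 150, 200, 300, 400, 600], dtype=float)
F = np.array([0.05, 0.20, 0.38, 0.55, 0.78, 0.90, 0.98])

def lognormal_cdf(d, dg, sg):
    """Log-normal cumulative mass curve: mass median dg, geometric std sg."""
    return stats.norm.cdf(np.log(d / dg) / np.log(sg))

def rosin_rammler_cdf(d, d63, n):
    """Rosin-Rammler cumulative mass passing; d63 is the 63.2% diameter."""
    return 1.0 - np.exp(-(d / d63) ** n)

ln_p, _ = optimize.curve_fit(lognormal_cdf, d, F, p0=[200.0, 2.0])
rr_p, _ = optimize.curve_fit(rosin_rammler_cdf, d, F, p0=[250.0, 1.5])

rmse = lambda model, p: np.sqrt(np.mean((model(d, *p) - F) ** 2))
print("log-normal   ", ln_p, rmse(lognormal_cdf, ln_p))
print("Rosin-Rammler", rr_p, rmse(rosin_rammler_cdf, rr_p))
```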

  13. Collection strategy, inner morphology, and size distribution of dust particles in ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Balden, M.; Endstrasser, N.; Humrickhouse, P. W.; Rohde, V.; Rasinski, M.; von Toussaint, U.; Elgeti, S.; Neu, R.; the ASDEX Upgrade Team

    2014-07-01

    The dust collection and analysis strategy in ASDEX Upgrade (AUG) is described. During five consecutive operation campaigns (2007-2011), Si collectors were installed, supported by filtered vacuum sampling and collection with adhesive tapes in 2009. The outer and inner morphology (e.g. shape) and elemental composition of the collected particles were analysed by scanning electron microscopy. The majority of the ˜50 000 analysed particles on the Si collectors of the 2009 campaign contain tungsten—the plasma-facing material in AUG—and show essentially two types of outer appearance: spheroids and irregularly shaped particles. The great majority of the W-dominated spheroids consist of a solid W core, i.e. they are solidified W droplets. Some of these particles are coated with a low-Z material, a process that presumably occurs in the far scrape-off layer plasma. In addition, some conglomerates of B, C and W appear as spherical particles after their contact with plasma. The great majority of the particles classified as B-, C- and W-dominated irregularly shaped particles consist of the same conglomerate, with a varying fraction of W embedded in the B-C matrix and some porosity, which can exceed 50%. The fragile structures of many conglomerates confirm the absence of intensive plasma contact. Both the ablation and mobilization of conglomerate material and the production of W droplets are proposed to be triggered by arcing. The size distribution of each dust particle class is best described by a log-normal distribution, allowing an extrapolation of the dust volume and surface area. The maximum of this distribution is observed above the resolution limit of 0.28 µm only for the W-dominated spheroids, at around 1 µm. The amount of W-containing dust is extrapolated to be less than 300 mg on the horizontal areas of AUG.
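
    The log-normal fits above are made to particles observed only above the 0.28 µm resolution limit, so any extrapolation of dust volume or surface area implicitly relies on a truncated fit. A minimal sketch of a left-truncated log-normal maximum-likelihood fit is given below; the diameter sample and the handling of the detection limit are assumptions for illustration, not the AUG analysis code.

```python
import numpy as np
from scipy import stats, optimize

D_MIN = 0.28   # resolution limit in microns

def neg_loglik_truncated_lognormal(params, d):
    """Negative log-likelihood of a log-normal left-truncated at D_MIN."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    logpdf = stats.norm.logpdf(np.log(d), mu, sigma) - np.log(d)
    log_tail = stats.norm.logsf(np.log(D_MIN), mu, sigma)     # P(D > D_MIN)
    return -np.sum(logpdf - log_tail)

# Hypothetical observed diameters (microns), all above the resolution limit
rng = np.random.default_rng(2)
d_all = np.exp(rng.normal(np.log(1.0), 0.5, size=2000))
d_obs = d_all[d_all >= D_MIN]

res = optimize.minimize(neg_loglik_truncated_lognormal, x0=[0.0, 0.5],
                        args=(d_obs,), method="Nelder-Mead")
mu_hat, sig_hat = res.x
# Fraction of particles below the resolution limit, recovered by extrapolation
print(stats.norm.cdf((np.log(D_MIN) - mu_hat) / sig_hat))
```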

  14. An analysis of fracture trace patterns in areas of flat-lying sedimentary rocks for the detection of buried geologic structure. [Kansas and Texas

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1974-01-01

    Two study areas in a cratonic platform underlain by flat-lying sedimentary rocks were analyzed to determine whether a quantitative relationship exists between fracture trace patterns and their frequency distributions and subsurface structural closures which might contain petroleum. Fracture trace lengths and frequency (number of fracture traces per unit area) were analyzed by trend surface analysis, and length-frequency distributions were also compared to a standard Gaussian distribution. Composite rose diagrams of fracture traces were analyzed using a multivariate analysis method which grouped or clustered the rose diagrams and their respective areas on the basis of the behavior of the rays of the rose diagram. Analysis indicates that the lengths of fracture traces are log-normally distributed according to the mapping technique used. Fracture trace frequency appeared higher on the flanks of active structures and lower around passive reef structures. Fracture trace log-mean lengths were shorter over several types of structures, perhaps due to increased fracturing and subsequent erosion. Analysis of rose diagrams using a multivariate technique indicated lithology as the primary control for the lower grouping levels. Groupings at higher levels indicated that areas overlying active structures may be isolated from their neighbors by this technique, while passive structures showed no differences that could be isolated.
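
    Where the record states that fracture trace lengths are log-normally distributed, a quick check of that claim on digitized lengths can be sketched as follows (log-transform the lengths and apply a normality test); the length sample and function name are hypothetical.

```python
import numpy as np
from scipy import stats

def lognormality_check(lengths):
    """Test whether trace lengths are consistent with a log-normal by
    applying a Shapiro-Wilk normality test to the log-transformed lengths."""
    log_l = np.log(np.asarray(lengths, dtype=float))
    stat, p = stats.shapiro(log_l)
    return {"log_mean": log_l.mean(), "log_std": log_l.std(ddof=1),
            "shapiro_W": stat, "p_value": p}

# Hypothetical trace lengths (km) digitized from imagery
rng = np.random.default_rng(3)
lengths = np.exp(rng.normal(0.5, 0.4, size=200))
print(lognormality_check(lengths))
```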

  15. Sample Size Requirements for Studies of Treatment Effects on Beta-Cell Function in Newly Diagnosed Type 1 Diabetes

    PubMed Central

    Lachin, John M.; McGee, Paula L.; Greenbaum, Carla J.; Palmer, Jerry; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8–12 years of age, adolescents (13–17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13–17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes. PMID:22102862

  16. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    PubMed

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
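
    Both records above describe sample-size calculations for detecting a relative treatment difference on a transformed scale. A minimal sketch of the standard two-sample formula on such a scale is given below; the residual standard deviation, the targeted relative difference, and the use of a pure log scale (for which a relative difference r maps to an absolute difference of |ln(1 - r)|) are illustrative assumptions, not the TrialNet values.

```python
import math
from scipy import stats

def n_per_group(sigma, delta, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sided two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2,
    with sigma and delta expressed on the analysis (transformed) scale."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical usage: on a log scale, a 30% treatment-vs-control difference in
# geometric means corresponds to delta = |ln(1 - 0.30)|; sigma is a placeholder
# residual SD of the transformed outcome at the follow-up visit.
delta = abs(math.log(1 - 0.30))
print(n_per_group(sigma=0.45, delta=delta))
```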

  17. The Statistical Nature of Fatigue Crack Propagation

    DTIC Science & Technology

    1977-03-01

    AFFDL-TR report: THE STATISTICAL NATURE OF FATIGUE CRACK PROPAGATION. D. A. Virkler, B. M. Hillberry, P. K. Goel, School … function of crack length was best represented by the three-parameter log-normal distribution. Six growth rate calculation methods were investigated and the … dN, which varied moderately as a function of crack length; replicate a vs. N data were predicted. The predicted data reproduced the mean behavior but …
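
    The record identifies a three-parameter log-normal as the best description of the cycle-count distribution at fixed crack length. A three-parameter (shifted) log-normal fit can be sketched with scipy, whose lognorm location parameter plays the role of the threshold; the replicate cycle counts below are synthetic placeholders, not the report's data.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate cycle counts N reaching a fixed crack length
rng = np.random.default_rng(4)
N = 2.0e5 + stats.lognorm(s=0.3, scale=5.0e4).rvs(size=50, random_state=rng)

# Three-parameter log-normal: shape (sigma of ln), loc (threshold), scale (exp(mu))
shape, loc, scale = stats.lognorm.fit(N)
print(shape, loc, scale)

# Goodness of fit via Kolmogorov-Smirnov against the fitted distribution
print(stats.kstest(N, "lognorm", args=(shape, loc, scale)))
```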

  18. Particle size distribution of mainstream tobacco and marijuana smoke. Analysis using the electrical aerosol analyzer.

    PubMed

    Anderson, P J; Wilson, J D; Hiller, F C

    1989-07-01

    Accurate measurement of cigarette smoke particle size distribution is important for estimation of lung deposition. Most prior investigators have reported a mass median diameter (MMD) in the size range of 0.3 to 0.5 micron, with a small geometric standard deviation (GSD), indicating few ultrafine (less than 0.1 micron) particles. A few studies, however, have suggested the presence of ultrafine particles by reporting a smaller count median diameter (CMD). Part of this disparity may be due to the inefficiency of previous sizing methods in measuring the ultrafine size range. We used the electrical aerosol analyzer (EAA) to evaluate the size distribution of smoke from standard research cigarettes, commercial filter cigarettes, and marijuana cigarettes with different delta 9-tetrahydrocannabinol contents. Four 35-cm3, 2-s puffs were generated at 60-s intervals, rapidly diluted, and passed through a charge neutralizer and into a 240-L chamber. The size distribution for six cigarettes of each type was measured, CMD and GSD were determined from a computer-generated log probability plot, and MMD was calculated. The size distribution parameters obtained were similar for all cigarettes tested, with an average CMD of 0.1 micron, an MMD of 0.38 micron, and a GSD of 2.0. The MMD found using the EAA is similar to that previously reported, but the CMD is distinctly smaller and the GSD larger, indicating the presence of many more ultrafine particles. These results may explain the disparity of CMD values found in existing data. Ultrafine particles are of toxicologic importance because their respiratory tract deposition is significantly higher than that of particles 0.3 to 0.5 micron in size and because their large surface area facilitates adsorption and delivery of potentially toxic gases to the lung.
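
    For a log-normal aerosol, the count median diameter, geometric standard deviation and mass median diameter quoted above are linked by the Hatch-Choate relation MMD = CMD · exp(3 ln²GSD); a quick check with the reported CMD of 0.1 micron and GSD of 2.0 gives a value in the neighbourhood of the reported MMD of 0.38 micron.

```python
import math

def mmd_from_cmd(cmd_um, gsd):
    """Hatch-Choate conversion for a log-normal aerosol:
    MMD = CMD * exp(3 * (ln GSD)^2)."""
    return cmd_um * math.exp(3.0 * math.log(gsd) ** 2)

# Reported values: CMD = 0.1 um, GSD = 2.0  ->  ~0.42 um,
# close to the reported MMD of 0.38 um.
print(mmd_from_cmd(0.1, 2.0))
```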

  19. Large-Scale Weibull Analysis of H-451 Nuclear- Grade Graphite Specimen Rupture Data

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Walker, Andrew; Baker, Eric H.; Murthy, Pappu L.; Bratton, Robert L.

    2012-01-01

    A Weibull analysis was performed of the strength distribution and size effects for 2000 specimens of H-451 nuclear-grade graphite. The data, generated elsewhere, measured the tensile and four-point-flexure room-temperature rupture strength of specimens excised from a single extruded graphite log. Strength variation was compared with specimen location, size, and orientation relative to the parent body. In our study, data were progressively and extensively pooled into larger data sets to discriminate overall trends from local variations and to investigate the strength distribution. The CARES/Life and WeibPar codes were used to investigate issues regarding the size effect, Weibull parameter consistency, and nonlinear stress-strain response. Overall, the Weibull distribution described the behavior of the pooled data very well. However, the issue regarding the smaller-than-expected size effect remained. This exercise illustrated that a conservative approach using a two-parameter Weibull distribution is best for designing graphite components with low probability of failure for the in-core structures in the proposed Generation IV (Gen IV) high-temperature gas-cooled nuclear reactors. This exercise also demonstrated the continuing need to better understand the mechanisms driving stochastic strength response. Extensive appendixes are provided with this report to show all aspects of the rupture data and analytical results.
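
    As a sketch of the two-parameter Weibull treatment and the weakest-link size effect discussed above, the snippet below fits a Weibull modulus and characteristic strength to rupture data and scales the characteristic strength to a different effective volume via sigma2 = sigma1 * (V1/V2)^(1/m); the strengths and volumes are placeholders, and the CARES/Life and WeibPar analyses are far more extensive than this.

```python
import numpy as np
from scipy import stats

# Hypothetical rupture strengths (MPa) of specimens with effective volume V1
rng = np.random.default_rng(5)
strengths = stats.weibull_min(c=10.0, scale=20.0).rvs(size=100, random_state=rng)

# Two-parameter Weibull fit (threshold fixed at zero)
m, _, sigma0 = stats.weibull_min.fit(strengths, floc=0)
print("Weibull modulus m =", m, " characteristic strength =", sigma0)

def scaled_characteristic_strength(sigma0, m, v1, v2):
    """Weakest-link size effect: characteristic strength at volume v2
    predicted from the fit at volume v1 via sigma2 = sigma1 * (v1/v2)**(1/m)."""
    return sigma0 * (v1 / v2) ** (1.0 / m)

# Predict the characteristic strength of a specimen with 10x the effective volume
print(scaled_characteristic_strength(sigma0, m, v1=1.0, v2=10.0))
```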

  20. Application of survival analysis methodology to the quantitative analysis of LC-MS proteomics data.

    PubMed

    Tekwe, Carmen D; Carroll, Raymond J; Dabney, Alan R

    2012-08-01

    Protein abundance in quantitative proteomics is often based on observed spectral features derived from liquid chromatography mass spectrometry (LC-MS) or LC-MS/MS experiments. Peak intensities are largely non-normal in distribution. Furthermore, LC-MS-based proteomics data frequently have large proportions of missing peak intensities due to censoring mechanisms on low-abundance spectral features. Recognizing that the observed peak intensities detected with the LC-MS method are all positive, skewed and often left-censored, we propose using survival methodology to carry out differential expression analysis of proteins. Various standard statistical techniques, including non-parametric tests such as the Kolmogorov-Smirnov and Wilcoxon-Mann-Whitney rank sum tests, and parametric survival and accelerated failure time (AFT) models with log-normal, log-logistic and Weibull distributions, were used to detect differentially expressed proteins. The statistical operating characteristics of each method are explored using both real and simulated datasets. Survival methods generally have greater statistical power than standard differential expression methods when the proportion of missing protein level data is 5% or more. In particular, the AFT models we consider consistently achieve greater statistical power than standard testing procedures, with the discrepancy widening as the proportion of missing values increases. The testing procedures discussed in this article can all be performed using readily available software such as R. The R codes are provided as supplemental materials. ctekwe@stat.tamu.edu.
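
    A minimal sketch of the left-censored log-normal (Tobit-type) likelihood underlying one of the AFT variants mentioned above, written with scipy rather than the authors' R code: censored peak intensities contribute the normal CDF at the detection limit, while observed intensities contribute the log-normal density. The detection limit, group coding and data are placeholder assumptions.

```python
import numpy as np
from scipy import stats, optimize

def neg_loglik(params, y, censored, group):
    """Left-censored log-normal model on log intensities:
    log(y) = b0 + b1*group + sigma*eps. Censored observations contribute
    the CDF at the detection limit instead of the density."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * group
    z = (np.log(y) - mu) / sigma
    ll_obs = stats.norm.logpdf(z) - np.log(sigma) - np.log(y)   # observed intensities
    ll_cens = stats.norm.logcdf(z)                              # below detection limit
    return -np.sum(np.where(censored, ll_cens, ll_obs))

# Hypothetical peak intensities; censored values are recorded at the detection limit
rng = np.random.default_rng(6)
group = np.repeat([0, 1], 50)
true = np.exp(rng.normal(2.0 + 0.5 * group, 0.8))
limit = 5.0
censored = true < limit
y = np.where(censored, limit, true)

res = optimize.minimize(neg_loglik, x0=[1.0, 0.0, 0.0], args=(y, censored, group),
                        method="Nelder-Mead")
print("log-fold change estimate:", res.x[1])    # b1, the group effect on the log scale
```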
