Sample records for log-normal probability density

  1. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields an SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
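
    A minimal sketch of the kind of log-normal characterization quoted above: fit a log-normal to a set of explosion energies by taking statistics of the logarithms, then report the median energy and the multiplicative 1σ dispersion factor. The energy values below are made up for illustration and are not the paper's sample, and this is not the authors' evolution-modeling pipeline.

```python
import numpy as np

# Hypothetical explosion energies in units of 10^51 erg (illustrative values only,
# not the paper's sample).
energies = np.array([0.2, 0.4, 0.5, 0.7, 1.1, 1.6, 0.3, 0.9, 2.4, 0.6])

log_e = np.log(energies)
mu, sigma = log_e.mean(), log_e.std(ddof=1)

print(f"median energy      : {np.exp(mu):.2f} x 10^51 erg")
print(f"1-sigma dispersion : factor of {np.exp(sigma):.1f} in energy")
```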

  2. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage of being bounded to positive values, an important consideration since the probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.

  3. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  4. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
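
    The core of the log-normal mock recipe described above can be sketched in a few lines: draw a Gaussian field, exponentiate it into a positive, zero-mean log-normal overdensity, and Poisson-sample galaxy counts cell by cell. The grid size, box size, and mean density below are arbitrary assumptions, uncorrelated noise stands in for a Gaussian field drawn from a target power spectrum, and the velocity-field and redshift-space steps of the public code are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

ngrid, boxsize = 64, 500.0           # cells per side and box length in Mpc/h (assumed)
nbar = 1e-3                          # mean galaxy number density per (Mpc/h)^3 (assumed)
cell_vol = (boxsize / ngrid) ** 3

# A Gaussian field G(x); uncorrelated noise is used here for brevity, whereas a real
# mock would draw G in Fourier space from a target power spectrum.
sigma_G = 0.8
G = rng.normal(0.0, sigma_G, size=(ngrid, ngrid, ngrid))

# Log-normal overdensity: delta = exp(G - sigma^2/2) - 1 has zero mean and is > -1.
delta = np.exp(G - 0.5 * sigma_G**2) - 1.0

# Poisson-sample galaxy counts cell by cell.
counts = rng.poisson(nbar * cell_vol * (1.0 + delta))
print("galaxies drawn:", counts.sum(), " expected:", int(nbar * boxsize**3))
```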

  5. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time it shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
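
    Because the Beta model above is fully characterized by the first two moments of the concentration, the moment-matching step can be written compactly. The sketch below converts an assumed mean and variance of a concentration rescaled to [0, 1] into Beta parameters and an exceedance probability; the numbers are illustrative, not values from the paper.

```python
import numpy as np
from scipy import stats

def beta_from_moments(mean, var):
    """Moment-matched Beta(alpha, beta) for a concentration rescaled to [0, 1]."""
    if not 0.0 < var < mean * (1.0 - mean):
        raise ValueError("need 0 < var < mean * (1 - mean)")
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

# Illustrative moments of a normalized concentration C/C0 (not values from the paper).
alpha, beta = beta_from_moments(mean=0.2, var=0.02)
p_exceed = stats.beta.sf(0.5, alpha, beta)   # probability that C/C0 exceeds 0.5
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}, P(C/C0 > 0.5) = {p_exceed:.3f}")
```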

  6. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.

  7. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  8. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  9. Binary data corruption due to a Brownian agent

    NASA Astrophysics Data System (ADS)

    Newman, T. J.; Triampo, Wannapong

    1999-05-01

    We introduce a model of binary data corruption induced by a Brownian agent (active random walker) on a d-dimensional lattice. A continuum formulation allows the exact calculation of several quantities related to the density of corrupted bits ρ, for example, the mean of ρ and the density-density correlation function. Excellent agreement is found with the results from numerical simulations. We also calculate the probability distribution of ρ in d=1, which is found to be log normal, indicating that the system is governed by extreme fluctuations.

  10. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  11. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  12. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
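
    A rough modern equivalent of the procedure described above: draw discrete random load histories with the listed non-Gaussian amplitude distributions and extract simple peak statistics. The distribution parameters and the peak criterion below are assumptions for illustration, not those of the original NASA report.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Discrete random load histories with several non-Gaussian amplitude distributions.
histories = {
    "log-normal":  rng.lognormal(mean=0.0, sigma=0.5, size=n),
    "Weibull":     rng.weibull(1.5, size=n),
    "exponential": rng.exponential(scale=1.0, size=n),
    "Poisson":     rng.poisson(lam=4.0, size=n).astype(float),
}

def peaks(x):
    """Local maxima of a sequence (simple three-point peak criterion)."""
    mid = x[1:-1]
    return mid[(mid > x[:-2]) & (mid > x[2:])]

for name, x in histories.items():
    p = peaks(x)
    frac = np.mean(p > x.mean() + 2.0 * x.std())     # peaks above mean + 2 sigma
    print(f"{name:12s}: {p.size:6d} peaks, fraction above mean+2sd = {frac:.4f}")
```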

  13. Relationship between the column density distribution and evolutionary class of molecular clouds as viewed by ATLASGAL

    NASA Astrophysics Data System (ADS)

    Abreu-Vicente, J.; Kainulainen, J.; Stutz, A.; Henning, Th.; Beuther, H.

    2015-09-01

    We present the first study of the relationship between the column density distribution of molecular clouds within nearby Galactic spiral arms and their evolutionary status as measured from their stellar content. We analyze a sample of 195 molecular clouds located at distances below 5.5 kpc, identified from the ATLASGAL 870 μm data. We define three evolutionary classes within this sample: starless clumps, star-forming clouds with associated young stellar objects, and clouds associated with H ii regions. We find that the N(H2) probability density functions (N-PDFs) of these three classes of objects are clearly different: the N-PDFs of starless clumps are narrowest and close to log-normal in shape, while star-forming clouds and H ii regions exhibit a power-law shape over a wide range of column densities and log-normal-like components only at low column densities. We use the N-PDFs to estimate the evolutionary time-scales of the three classes of objects based on a simple analytic model from literature. Finally, we show that the integral of the N-PDFs, the dense gas mass fraction, depends on the total mass of the regions as measured by ATLASGAL: more massive clouds contain greater relative amounts of dense gas across all evolutionary classes. Appendices are available in electronic form at http://www.aanda.org

  14. 2MASS wide-field extinction maps. V. Corona Australis

    NASA Astrophysics Data System (ADS)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2014-05-01

    We present a near-infrared extinction map of a large region (~870 deg^2) covering the isolated Corona Australis complex of molecular clouds. We reach a 1-σ error of 0.02 mag in the K-band extinction with a resolution of 3 arcmin over the entire map. We find that the Corona Australis cloud is about three times as large as revealed by previous CO and dust emission surveys. The cloud consists of a 45 pc long complex of filamentary structure from the well-known star-forming Western end (the head, N ≥ 10^23 cm^-2) to the diffuse Eastern end (the tail, N ≤ 10^21 cm^-2). Remarkably, about two thirds of the complex both in size and mass lie beneath A_V ~ 1 mag. We find that the probability density function (PDF) of the cloud cannot be described by a single log-normal function. Similar to prior studies, we found a significant excess at high column densities, but a log-normal + power-law tail fit does not work well at low column densities. We show that at low column densities near the peak of the observed PDF, both the amplitude and shape of the PDF are dominated by noise in the extinction measurements, making it impractical to derive the intrinsic cloud PDF below A_K < 0.15 mag. Above A_K ~ 0.15 mag, essentially the molecular component of the cloud, the PDF appears to be best described by a power law with index -3, but could also be described as the tail of a broad, relatively low-amplitude log-normal PDF that peaks at very low column densities. FITS files of the extinction maps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A18

  15. Detection of anomalous events

    DOEpatents

    Ferragut, Erik M.; Laska, Jason A.; Bridges, Robert A.

    2016-06-07

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The system can include a plurality of anomaly detectors that together implement an algorithm to identify low-probability events and detect atypical traffic patterns. The anomaly detector provides for comparability of disparate sources of data (e.g., network flow data and firewall logs). Additionally, the anomaly detector allows for regulatability, meaning that the algorithm can be user configurable to adjust a number of false alerts. The anomaly detector can be used for a variety of probability density functions, including normal Gaussian distributions, irregular distributions, as well as functions associated with continuous or discrete variables.

  16. Performance of synchronous optical receivers using atmospheric compensation techniques.

    PubMed

    Belmonte, Aniceto; Khan, Joseph

    2008-09-01

    We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.

  17. Body fat assessed from body density and estimated from skinfold thickness in normal children and children with cystic fibrosis.

    PubMed

    Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A

    1988-12-01

    Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the normal boys and girls for the estimation of body-fat content from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thickness. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density even though their large residual volume was corrected for. However the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus body fat percent of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.

  18. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.

  19. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. Small sample sizes induce errors in the estimated HOS parameters, restraining real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine both kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. Then, the proposed statistics are tested, using Monte Carlo simulations, on both normal and Log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.

  20. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
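
    For context only, the sketch below estimates a single log-normal mean and a Cox-type confidence interval from the log-scale sample moments; it is not the SLRT, MLRT, GV, or variance-estimate-recovery procedure studied in the paper, and the simulated data are arbitrary.

```python
import numpy as np
from scipy import stats

def lognormal_mean_ci(x, conf=0.95):
    """Point estimate and Cox-type CI for E[X] when log(X) ~ Normal(mu, sigma^2)."""
    y = np.log(x)
    n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)
    theta = ybar + s2 / 2.0                                 # log of the log-normal mean
    se = np.sqrt(s2 / n + s2**2 / (2.0 * (n - 1)))
    z = stats.norm.ppf(0.5 + conf / 2.0)
    return np.exp(theta), np.exp(theta - z * se), np.exp(theta + z * se)

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.6, size=40)             # simulated sample
est, lo, hi = lognormal_mean_ci(x)
print(f"estimate {est:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), true mean {np.exp(1.18):.2f}")
```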

  1. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function of the form ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  2. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
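
    A bare-bones version of the moment-matching idea described above, assuming a bounded support [a, b]: the coefficients of an Nth-degree polynomial PDF follow from a linear system equating its raw moments to the sample moments. The degree, support choice, and test data below are assumptions, and the algorithmic safeguards of the paper are omitted.

```python
import numpy as np

def polynomial_pdf_from_moments(x, degree, a, b):
    """Coefficients c_k of p(t) = sum_k c_k t^k on [a, b] matching raw sample moments."""
    mom = np.array([np.mean(x**m) for m in range(degree + 1)])          # mu_0 = 1, mu_1, ...
    M = np.array([[(b**(m + k + 1) - a**(m + k + 1)) / (m + k + 1)
                   for k in range(degree + 1)] for m in range(degree + 1)])
    return np.linalg.solve(M, mom)

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.4, size=20_000)
a, b = 0.0, float(np.quantile(x, 0.999))            # support chosen to cover the data
c = polynomial_pdf_from_moments(x, degree=6, a=a, b=b)

t = np.linspace(a, b, 5)
p = np.polyval(c[::-1], t)                          # polyval wants highest degree first
print("polynomial PDF approximation at a few points:", np.round(p, 3))
```

    Note that a polynomial fitted this way can dip below zero near the edges of the support, a known limitation of polynomial PDF approximations.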

  3. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  4. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.
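
    The building block of such a cascade, a Gaussian amplitude modulated by a log-normal factor, is easy to simulate and already shows the intermittent, heavy-tailed behaviour described above. The intermittency parameter below is an assumed value; the full multi-step cascade and the GMM estimator of the paper are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 50_000
lam = 0.5                                   # intermittency parameter (assumed value)

# x = eps * exp(omega): a Gaussian amplitude modulated by a log-normal factor.
eps = rng.normal(0.0, 1.0, size=n)
omega = rng.normal(0.0, lam, size=n)
x = eps * np.exp(omega)

# Heavy tails relative to a Gaussian show up as large excess kurtosis and rare,
# very large excursions (the intermittent bursts).
print("excess kurtosis:", round(float(stats.kurtosis(x)), 2))       # 0 for a Gaussian
print("fraction beyond 5 standard deviations:", np.mean(np.abs(x) > 5 * x.std()))
```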

  5. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  6. Investigating uplift in the South-Western Barents Sea using sonic and density well log measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Ellis, M.

    2014-12-01

    Sediments in the Barents Sea have undergone large amounts of uplift due to Plio-Pleistocene deglaciation as well as Palaeocene-Eocene Atlantic rifting. Uplift affects the reservoir quality, seal capacity and fluid migration. Therefore, it is important to gain reliable uplift estimates in order to evaluate the petroleum prospectivity properly. To this end, a number of quantification methods have been proposed, such as Apatite Fission Track Analysis (AFTA), and integration of seismic surveys with well log data. AFTA usually provides accurate uplift estimates, but the data are limited due to the high cost. While seismic surveys can provide good uplift estimates when well data are available for calibration, the uncertainty can be large in areas where there is little to no well data. We estimated South-Western Barents Sea uplift based on well data from the Norwegian Petroleum Directorate. Primary assumptions include time-irreversible shale compaction trends and a universal normal compaction trend for a specified formation. Sonic and density logs from two Cenozoic shale formation intervals, Kolmule and Kolje, were used for the study. For each formation, we studied logs of all released wells, and established exponential normal compaction trends based on a single well. That well was then deemed the reference well, and relative uplift can be calculated at other well locations based on the offset from the normal compaction trend. We found that the amount of uplift increases along the SW to NE direction, with a maximum difference of 1,447 m from the Kolje FM estimate, and 699 m from the Kolmule FM estimate. The average standard deviation of the estimated uplift is 130 m for the Kolje FM, and 160 m for the Kolmule FM using the density log. While results from density logs and sonic logs have good agreement in general, the density log provides slightly better results in terms of higher consistency and lower standard deviation. Our results agree with published papers qualitatively, with some differences in the actual amounts of uplift. The results are considered to be more accurate due to the higher resolution of the log-scale data used.

  7. Estimating sales and sales market share from sales rank data for consumer appliances

    NASA Astrophysics Data System (ADS)

    Touzani, Samir; Van Buskirk, Robert

    2016-06-01

    Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
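
    One plausible way to turn a calibrated truncated log-normal into a rank-to-sales proxy, assuming that rank r out of N products corresponds to the 1 - (r - 0.5)/N quantile of the truncated volume distribution. The parameters (mu, sigma, truncation bounds, catalogue size) are invented for illustration, and the calibration step described in the paper is not shown.

```python
import numpy as np
from scipy import stats

def rank_to_sales(rank, n_products, mu, sigma, lower, upper):
    """Sales proxy for a given sales rank under a truncated log-normal volume model."""
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    q_lo, q_hi = dist.cdf(lower), dist.cdf(upper)
    # Rank 1 is the best seller, i.e. the highest quantile of the volume distribution.
    q = 1.0 - (rank - 0.5) / n_products
    return dist.ppf(q_lo + q * (q_hi - q_lo))

# Illustrative calibration; every number below is assumed, not taken from the paper.
for r in (1, 10, 100, 1000):
    sales = rank_to_sales(r, n_products=5000, mu=4.0, sigma=1.2, lower=1.0, upper=1e5)
    print(f"rank {r:4d} -> sales proxy {sales:10.1f}")
```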

  8. Parameter estimation and forecasting for multiplicative log-normal cascades

    NASA Astrophysics Data System (ADS)

    Leövey, Andrés E.; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets.

  9. Muscle categorization using PDF estimation and Naive Bayes classification.

    PubMed

    Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W

    2012-01-01

    The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e. it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs and then Bayes rule is used to aggregate the MUPT characterization probabilities to calculate muscle level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than the discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using a log-normal distribution to represent skewed histograms.
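
    A generic sketch of the PDF-plus-Bayes aggregation idea: per-train Gaussian class-conditional densities for a single MUPT feature are combined across trains under a naive independence assumption to give muscle-level class probabilities. The classes, feature, means, and standard deviations below are invented for illustration and are not clinical values.

```python
import numpy as np
from scipy import stats

# Gaussian class-conditional densities for a single MUPT feature (e.g. duration in ms).
# Class means and standard deviations are invented for illustration only.
classes = {"normal": (10.0, 2.0), "myopathic": (7.0, 2.0), "neurogenic": (14.0, 3.0)}
prior = {c: 1.0 / len(classes) for c in classes}

def muscle_posterior(feature_values):
    """Aggregate per-MUPT likelihoods into muscle-level class probabilities."""
    logpost = {c: np.log(prior[c]) for c in classes}
    for v in feature_values:                    # naive Bayes: trains treated as independent
        for c, (mu, sd) in classes.items():
            logpost[c] += stats.norm.logpdf(v, mu, sd)
    m = max(logpost.values())
    w = {c: np.exp(lp - m) for c, lp in logpost.items()}
    z = sum(w.values())
    return {c: round(w[c] / z, 3) for c in w}

print(muscle_posterior([12.5, 13.8, 15.1, 11.9]))   # leans toward "neurogenic"
```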

  10. Thorium normalization as a hydrocarbon accumulation indicator for Lower Miocene rocks in Ras Ghara area, Gulf of Suez, Egypt

    NASA Astrophysics Data System (ADS)

    El-Khadragy, A. A.; Shazly, T. F.; AlAlfy, I. M.; Ramadan, M.; El-Sawy, M. Z.

    2018-06-01

    An exploration method has been developed that uses surface and aerial gamma-ray spectral measurements to prospect for petroleum in stratigraphic and structural traps. The Gulf of Suez is an important region for studying hydrocarbon potential in Egypt. The thorium normalization technique was applied to the sandstone reservoirs in the region to determine zones of hydrocarbon potential using the three spectrometric gamma-ray logs (eU, eTh, and K%). This method was applied to the recorded gamma-ray spectrometric logs of the Rudeis and Kareem Formations in the Ras Ghara oil field, Gulf of Suez, Egypt. The conventional well logs (gamma-ray, resistivity, neutron, density, and sonic logs) were analyzed to determine the net pay zones in the study area. The agreement ratios between the thorium normalization technique and the results of the well log analyses are high, so the thorium normalization technique can be used as a guide to hydrocarbon accumulation in the studied reservoir rocks.

  11. Log-Normal Distribution of Cosmic Voids in Simulations and Mocks

    NASA Astrophysics Data System (ADS)

    Russell, E.; Pycke, J.-R.

    2017-01-01

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
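
    The three-parameter (shifted) log-normal fit described above maps directly onto scipy's lognorm, whose shape, loc, and scale parameters play the roles of the shape, shift, and scale of the void size distribution. The synthetic radii below merely stand in for a catalog sample; this is not the Cosmic Void Catalog analysis itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Stand-in "void effective radii" in Mpc/h; real values would come from the catalog.
radii = 2.0 + rng.lognormal(mean=2.3, sigma=0.35, size=3000)

# scipy's lognorm has three parameters (shape, loc, scale); loc plays the role of the shift.
shape, loc, scale = stats.lognorm.fit(radii)
print(f"shape = {shape:.3f}, shift = {loc:.2f} Mpc/h, scale = {scale:.2f} Mpc/h")

# Skewness and variance of the fitted distribution, the quantities the paper relates
# to the maximum tree depth of substructures.
var = stats.lognorm.var(shape, loc, scale)
skew = float(stats.lognorm.stats(shape, loc, scale, moments="s"))
print(f"variance = {var:.2f}, skewness = {skew:.2f}")
```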

  12. Snag longevity in relation to wildfire and postfire salvage logging

    Treesearch

    Robin E. Russell; Victoria A. Saab; Jonathan G. Dudley; Jay J. Rotella

    2006-01-01

    Snags create nesting, foraging, and roosting habitat for a variety of wildlife species. Removal of snags through postfire salvage logging reduces the densities and size classes of snags remaining after wildfire. We determined important variables associated with annual persistence rates (the probability a snag remains standing from 1 year to the next) of large conifer...

  13. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  14. The emergence of different tail exponents in the distributions of firm size variables

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki

    2013-05-01

    We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (log K, log L, log Y), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.

  15. Stan: A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  16. Stan: A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  17. A Probabilistic Model for Predicting Attenuation of Viruses During Percolation in Unsaturated Natural Barriers

    NASA Astrophysics Data System (ADS)

    Faulkner, B. R.; Lyon, W. G.

    2001-12-01

    We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated the most important main effects on probability of failure to achieve 4-log attenuation in our model were mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and the mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure of a one-meter thick proposed hydrogeologic barrier and a water content of 0.3. With the currently available data and the associated uncertainty, we predicted soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.
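
    A toy Monte Carlo in the same spirit, assuming a purely advective travel time through the barrier and first-order virus inactivation; this is not the complete-mixing formulation or the tabulated parameter PDFs of the study, and every distribution and constant below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Toy parameter distributions; all of them are assumptions, not the study's tabulated PDFs.
K     = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=n)   # saturated hydraulic conductivity [m/s]
theta = rng.normal(0.30, 0.05, size=n).clip(0.05, 0.5)        # volumetric water content [-]
lam   = rng.lognormal(mean=np.log(0.1), sigma=0.7, size=n)    # virus inactivation rate [1/day]
L, grad = 1.0, 1.0                                            # barrier thickness [m], unit hydraulic gradient

travel_days = L * theta / (K * grad) / 86_400.0               # advective travel time through the barrier
log10_removal = lam * travel_days / np.log(10.0)              # first-order decay expressed in log10 units

p_fail = np.mean(log10_removal < 4.0)
print(f"probability of failing to achieve 4-log attenuation: {p_fail:.3f}")
```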

  18. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
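
    A Monte Carlo sketch of the channel model named above (Poisson photodetection with log-normal intensity scintillation), sweeping the decision threshold to locate the minimum bit-error probability. The photocount levels and log-amplitude variance are assumed values, and the report's semi-empirical threshold relations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n_bits = 200_000

Ks, Kb = 30.0, 2.0          # mean signal and background photocounts per slot (assumed)
sigma_chi2 = 0.1            # log-amplitude variance of the scintillation (assumed)

bits = rng.integers(0, 2, size=n_bits)                       # 0 = off, 1 = on
chi = rng.normal(0.0, np.sqrt(sigma_chi2), size=n_bits)      # log-amplitude fluctuations
fade = np.exp(2.0 * chi - 2.0 * sigma_chi2)                  # unit-mean intensity factor
counts = rng.poisson(Kb + bits * Ks * fade)                  # Poisson detection process

# Sweep the decision threshold and keep the one minimizing the bit error probability.
best_err, best_T = min((np.mean((counts > T) != (bits == 1)), T) for T in range(1, 40))
print(f"optimum threshold = {best_T} photocounts, bit error probability = {best_err:.4f}")
```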

  19. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log normal scintillation. The dependence of the probability of bit error on log amplitude variance and received signal strength was analyzed and semi-empirical relationships to predict the optimum detection threshold derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.

  20. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers an increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
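
    The predictive density of the mixture model described above is simply a weighted sum of a zero-truncated normal and a log-normal. In the sketch below the weight and component parameters are fixed, illustrative numbers; in EMOS they would be functions of the ensemble forecasts, estimated by minimizing a proper scoring rule over a training period.

```python
import numpy as np
from scipy import stats

def mixture_pdf(x, w, mu_tn, sig_tn, mu_ln, sig_ln):
    """Weighted mixture of a zero-truncated normal and a log-normal for wind speed."""
    a, b = (0.0 - mu_tn) / sig_tn, np.inf                    # truncation bounds in standard units
    tn = stats.truncnorm.pdf(x, a, b, loc=mu_tn, scale=sig_tn)
    ln = stats.lognorm.pdf(x, s=sig_ln, scale=np.exp(mu_ln))
    return w * tn + (1.0 - w) * ln

# Illustrative, fixed parameters (all assumed).
x = np.linspace(0.01, 25.0, 1000)
pdf = mixture_pdf(x, w=0.6, mu_tn=6.0, sig_tn=2.5, mu_ln=1.7, sig_ln=0.4)
print("mixture density integrates to ~", round(float(np.sum(pdf) * (x[1] - x[0])), 3))
```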

  1. Understanding star formation in molecular clouds. II. Signatures of gravitational collapse of IRDCs

    NASA Astrophysics Data System (ADS)

    Schneider, N.; Csengeri, T.; Klessen, R. S.; Tremblin, P.; Ossenkopf, V.; Peretto, N.; Simon, R.; Bontemps, S.; Federrath, C.

    2015-06-01

    We analyse column density and temperature maps derived from Herschel dust continuum observations of a sample of prominent, massive infrared dark clouds (IRDCs), i.e. G11.11-0.12, G18.82-0.28, G28.37+0.07, and G28.53-0.25. We disentangle the velocity structure of the clouds using 13CO 1→0 and 12CO 3→2 data, showing that these IRDCs are the densest regions in massive giant molecular clouds (GMCs) and not isolated features. The probability distribution functions (PDFs) of column densities for all clouds have a power-law distribution over all (high) column densities, regardless of the evolutionary stage of the cloud: G11.11-0.12, G18.82-0.28, and G28.37+0.07 contain (proto)-stars, while G28.53-0.25 shows no signs of star formation. This is in contrast to the purely log-normal PDFs reported for near and/or mid-IR extinction maps. We only find a log-normal distribution for lower column densities if we construct PDFs from the column density maps of the whole GMC in which the IRDCs are embedded. By comparing the PDF slope and the radial column density profile of three of our clouds, we attribute the power law to the effect of large-scale gravitational collapse and to local free-fall collapse of pre- and protostellar cores for the highest column densities. A significant impact on the cloud properties from radiative feedback is unlikely because the clouds are mostly devoid of star formation. Independent from the PDF analysis, we find infall signatures in the spectral profiles of 12CO for G28.37+0.07 and G11.11-0.12, supporting the scenario of gravitational collapse. Our results are in line with earlier interpretations that see massive IRDCs as the densest regions within GMCs, which may be the progenitors of massive stars or clusters. At least some of the IRDCs are probably the same features as ridges (high column density regions with N > 10^23 cm^-2 over small areas), which were defined for nearby IR-bright GMCs. Because IRDCs are only confined to the densest (gravity dominated) cloud regions, the PDF constructed from this kind of a clipped image does not represent the (turbulence dominated) low column density regime of the cloud. The column density maps (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A29

  2. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
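    For concreteness, a three-parameter (shifted) log-normal of the kind discussed above can be fitted with standard tools. The Python sketch below uses scipy's lognorm, whose (shape, loc, scale) parametrization is exactly a shifted log-normal; the void radii are synthetic stand-ins, not the Cosmic Void Catalog data.

```python
# Sketch: fit a three-parameter (shifted) log-normal to a sample of void radii.
# scipy's lognorm uses (s, loc, scale) = (shape, shift, exp(mean of log));
# the data below are synthetic stand-ins for catalog radii.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
radii = stats.lognorm.rvs(s=0.5, loc=2.0, scale=8.0, size=2000, random_state=rng)

s_hat, loc_hat, scale_hat = stats.lognorm.fit(radii)   # three-parameter MLE fit
print(f"shape={s_hat:.3f}  shift={loc_hat:.3f}  scale={scale_hat:.3f}")

# Skewness and variance of the fitted distribution, the two quantities the
# abstract relates to the maximum tree depth of substructure.
print("skewness:", stats.lognorm.stats(s_hat, loc_hat, scale_hat, moments="s"))
print("variance:", stats.lognorm.stats(s_hat, loc_hat, scale_hat, moments="v"))
```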

  3. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ2 distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
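    The non-central chi-squared pdf is easy to evaluate and check numerically. The sketch below uses the standard Poisson-mixture series representation (truncated, so an approximation) rather than the paper's finite-sum formula, and verifies it against scipy.stats.ncx2.

```python
# Sketch: the non-central chi-squared pdf written as a Poisson-weighted mixture
# of central chi-squared pdfs, checked against scipy.stats.ncx2. This is the
# standard series representation, truncated at n_terms, not the paper's
# finite-sum formula.
import numpy as np
from scipy import stats

def ncx2_pdf_series(x, df, nc, n_terms=200):
    j = np.arange(n_terms)
    weights = stats.poisson.pmf(j, nc / 2.0)          # Poisson(nc/2) weights
    return np.sum(weights[:, None] * stats.chi2.pdf(x, df + 2 * j[:, None]), axis=0)

x = np.linspace(0.1, 30, 7)
df, nc = 4, 3.0
print(np.allclose(ncx2_pdf_series(x, df, nc), stats.ncx2.pdf(x, df, nc)))  # True
```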

  4. Multiwavelength Studies of Rotating Radio Transients

    NASA Astrophysics Data System (ADS)

    Miller, Joshua J.

    Seven years ago, a new class of pulsars called the Rotating Radio Transients (RRATs) was discovered with the Parkes radio telescope in Australia (McLaughlin et al., 2006). These neutron stars are characterized by strong radio bursts at repeatable dispersion measures, but are not detectable using standard periodicity-search algorithms. We now know of roughly 100 of these objects, discovered in new surveys and re-analysis of archival survey data. They generally have longer periods than those of the normal pulsar population, and several have high magnetic fields, similar to those of other neutron star populations like the X-ray bright magnetars. However, some of the RRATs have spin-down properties very similar to those of normal pulsars, making it difficult to determine the cause of their unusual emission and possible evolutionary relationships between them and other classes of neutron stars. We have calculated single-pulse flux densities for eight RRAT sources observed using the Parkes radio telescope. Like normal pulsars, the pulse amplitude distributions are well described by log-normal probability distribution functions, though two show evidence for an additional power-law tail. Spectral indices are calculated for the seven RRATs which were detected at multiple frequencies. These RRATs have a mean spectral index of -3.2(7), or -3.1(1) when using mean flux densities derived from fitting log-normal probability distribution functions to the pulse amplitude distributions, suggesting that the RRATs have steeper spectra than normal pulsars. When only considering the three RRATs for which we have a wide range of observing frequencies, however, these values become -1.7(1) and -2.0(1), respectively, and are roughly consistent with those measured for normal pulsars. In all cases, these spectral indices exclude magnetar-like flat spectra. For PSR J1819-1458, the RRAT with the highest bursting rate, pulses were detected at 685 and 3029 MHz in simultaneous observations and have a spectral index consistent with our other analysis. We also present the results of simultaneous radio and X-ray observations of PSR J1819-1458. Our 94-ks XMM-Newton observation of the high magnetic field (~5×10^9 T) pulsar reveals a blackbody spectrum (kT ~ 130 eV) with a broad absorption feature, possibly composed of two lines at ~1.0 and ~1.3 keV. We performed a correlation analysis of the X-ray photons with radio pulses detected in 16.2 hours of simultaneous observations at 1-2 GHz with the Green Bank, Effelsberg, and Parkes telescopes. Both the detected X-ray photons and radio pulses appear to be randomly distributed in time. We find tentative evidence for a correlation between the detected radio pulses and X-ray photons on timescales of less than 10 pulsar spin periods, with the probability of this occurring by chance being 0.46%. This suggests that the physical process producing the radio pulses may also heat the polar cap.

  5. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
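    The quoted link-specific formulas are simple to evaluate. The Python sketch below computes the ordinal superiority measure for a given group effect β under the probit, log-log, and logit links; the value of β is purely illustrative.

```python
# Sketch: ordinal superiority gamma = P(Y1 > Y2) from a group effect beta under
# the links quoted in the abstract. The value of beta is illustrative.
from math import exp
from scipy.stats import norm

beta = 0.8
gamma_probit = norm.cdf(beta / 2)                    # probit link: Phi(beta/2)
gamma_loglog = exp(beta) / (1 + exp(beta))           # log-log link (exact)
gamma_logit = exp(beta / 2) / (1 + exp(beta / 2))    # logit link (approximate)
print(gamma_probit, gamma_loglog, gamma_logit)
```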

  6. The Tail Exponent for Stock Returns in Bursa Malaysia for 2003-2008

    NASA Astrophysics Data System (ADS)

    Rusli, N. H.; Gopir, G.; Usang, M. D.

    2010-07-01

    Econophysics is a developing discipline that applies mathematical tools usually employed in physical models to the study of financial models. In this study, an analysis of the time series behaviour of several blue chip and penny stock companies in the Main Market of Bursa Malaysia has been performed. The basic quantity used is the relative price change, also called the stock price return; the data are daily-sampled from the beginning of 2003 until the end of 2008, covering 1555 recorded trading days. The aim of this paper is to investigate the tail exponent of the distribution of blue chip and penny stock financial returns over this six-year period. Using a standard regression method, it is found that the distribution exhibits double scaling on the log-log plot of the cumulative probability of the normalized returns. We therefore calculate α separately for small-scale and large-scale returns. Based on the results obtained, the probability density function of the absolute stock price returns shows power-law behaviour, P(z) ~ z^-α, with values lying both inside and outside the Lévy stable regime, including α > 2. All the results are discussed in detail.
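    A minimal version of the tail-exponent estimate described above can be coded in a few lines. The Python sketch below regresses the log complementary cumulative distribution of normalized absolute returns on the log return in the tail region; the returns are synthetic (Student-t) stand-ins for the Bursa Malaysia series, and the tail threshold is an illustrative choice.

```python
# Sketch: estimate a tail exponent by ordinary regression on the log-log
# complementary cumulative distribution of normalized absolute returns.
# Synthetic heavy-tailed returns stand in for the actual stock data.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=3, size=5000)                # heavy-tailed toy returns
z = np.abs((returns - returns.mean()) / returns.std())   # normalized absolute returns

z_sorted = np.sort(z)[::-1]                              # descending order
ccdf = np.arange(1, z_sorted.size + 1) / z_sorted.size   # P(|Z| >= z_sorted)

tail = z_sorted > 2.0                                    # large-scale (tail) region, illustrative cut
slope, _ = np.polyfit(np.log(z_sorted[tail]), np.log(ccdf[tail]), 1)
alpha_ccdf = -slope                                      # cumulative probability ~ z^(-alpha_ccdf)
alpha_pdf = alpha_ccdf + 1.0                             # pdf exponent: P(z) ~ z^(-alpha_pdf)
print(f"alpha (cumulative) ≈ {alpha_ccdf:.2f}, alpha (pdf) ≈ {alpha_pdf:.2f}")
```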

  7. The shapes of column density PDFs. The importance of the last closed contour

    NASA Astrophysics Data System (ADS)

    Alves, João; Lombardi, Marco; Lada, Charles J.

    2017-10-01

    The probability distribution function of column density (PDF) has become the tool of choice for cloud structure analysis and star formation studies. Its simplicity is attractive, and the PDF could offer access to cloud physical parameters otherwise difficult to measure, but there has been some confusion in the literature on the definition of its completeness limit and shape at the low column density end. In this letter we use the natural definition of the completeness limit of a column density PDF, the last closed column density contour inside a surveyed region, and apply it to a set of large-scale maps of nearby molecular clouds. We conclude that there is no observational evidence for log-normal PDFs in these objects. We find that all studied molecular clouds have PDFs well described by power laws, including the diffuse cloud Polaris. Our results call for a new physical interpretation of the shape of the column density PDFs. We find that the slope of a cloud PDF is invariant to distance but not to the spatial arrangement of cloud material, and as such it is still a useful tool for investigating cloud structure.

  8. Size distribution of submarine landslides along the U.S. Atlantic margin

    USGS Publications Warehouse

    Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.

    2009-01-01

    Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km2 and 2410 km2 and volumes between 0.002 km3 and 179 km3. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few 10s of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km3 may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km3), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
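    The log-normal versus power-law comparison made above can be reproduced with standard likelihood fits. The Python sketch below fits both forms to a set of failure-scar volumes and compares log-likelihoods; the volumes are synthetic, and anchoring the Pareto form at the smallest observed volume is an illustrative simplification rather than the authors' procedure.

```python
# Sketch: compare log-normal and inverse power-law (Pareto) fits to landslide
# failure-scar volumes via maximum log-likelihood. The volumes are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
volumes = stats.lognorm.rvs(s=1.2, scale=0.86, size=300, random_state=rng)  # km^3

# Log-normal MLE (shift fixed at zero, since volumes are strictly positive).
s_hat, _, scale_hat = stats.lognorm.fit(volumes, floc=0)
ll_lognorm = np.sum(stats.lognorm.logpdf(volumes, s_hat, 0, scale_hat))

# Power-law (Pareto) MLE above the smallest observed volume x_min.
x_min = volumes.min()
b_hat = volumes.size / np.sum(np.log(volumes / x_min))   # Hill-type estimator
ll_pareto = np.sum(stats.pareto.logpdf(volumes, b_hat, scale=x_min))

print(f"log-normal logL = {ll_lognorm:.1f}, power-law logL = {ll_pareto:.1f}")
```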

  9. Application of continuous normal-lognormal bivariate density functions in a sensitivity analysis of municipal solid waste landfill.

    PubMed

    Petrovic, Igor; Hip, Ivan; Fredlund, Murray D

    2016-09-01

    The variability of untreated municipal solid waste (MSW) shear strength parameters, namely cohesion and shear friction angle, with respect to waste stability problems, is of primary concern due to the strong heterogeneity of MSW. A large number of municipal solid waste (MSW) shear strength parameters (friction angle and cohesion) were collected from published literature and analyzed. The basic statistical analysis has shown that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong but it still remained significant. Through use of a distribution fitting method it was found that the shear friction angle could be adjusted to a normal probability density function while cohesion follows the log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within the reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. Defined parameters were then used as input parameters for an Alternative Point Estimated Method (APEM) stability analysis on a real case scenario of the Jakusevec landfill. The Jakusevec landfill is the disposal site of the capital of Croatia - Zagreb. The analysis shows that in the case of a dry landfill the most significant factor influencing the safety factor was the shear friction angle of old, decomposed waste material, while in the case of a landfill with significant leachate level the most significant factor influencing the safety factor was the cohesion of old, decomposed waste material. The analysis also showed that a satisfactory level of performance with a small probability of failure was produced for the standard practice design of waste landfills as well as an analysis scenario immediately after the landfill closure. Copyright © 2015 Elsevier Ltd. All rights reserved.
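    One simple way to realize a pair of correlated shear-strength parameters with a normal marginal for friction angle and a log-normal marginal for cohesion is to correlate two Gaussian variables and exponentiate one of them. The Python sketch below does exactly that; it is not necessarily the exact bivariate density used in the study, and the means, standard deviations, and correlation are invented for illustration.

```python
# Sketch: draw correlated (friction angle, cohesion) pairs with a normal and a
# log-normal marginal by correlating two Gaussians and exponentiating one.
# One convenient construction, not necessarily the paper's exact bivariate
# density; the means, standard deviations, and correlation are made up.
import numpy as np

rng = np.random.default_rng(3)
mu_phi, sd_phi = 30.0, 6.0           # friction angle (degrees), normal marginal
mu_lnc, sd_lnc = np.log(15.0), 0.5   # log-cohesion (kPa), so cohesion is log-normal
rho = 0.3                            # correlation between the Gaussian drivers

cov = np.array([[sd_phi**2, rho * sd_phi * sd_lnc],
                [rho * sd_phi * sd_lnc, sd_lnc**2]])
phi, ln_c = rng.multivariate_normal([mu_phi, mu_lnc], cov, size=10000).T
cohesion = np.exp(ln_c)

print("mean friction angle:", phi.mean(), "median cohesion:", np.median(cohesion))
```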

  10. Mixed effects modelling for glass category estimation from glass refractive indices.

    PubMed

    Lucy, David; Zadora, Grzegorz

    2011-10-10

    520 glass fragments were taken from 105 glass items. Each item was either a container, a window, or glass from an automobile. Each of these three classes of use is defined as a glass category. Refractive indices were measured both before and after a programme of re-annealing. Because the refractive index of each fragment could not in itself be observed before and after re-annealing, a model based approach was used to estimate the change in refractive index for each glass category. It was found that less complex estimation methods would be equivalent to the full model, and were subsequently used. The change in refractive index was then used to calculate a measure of the evidential value for each item belonging to each glass category. The distributions of refractive index change were considered for each glass category, and it was found that, possibly due to small samples, members of the normal family would not adequately model the refractive index changes within two of the use types considered here. Two alternative approaches to modelling the change in refractive index were used: one employed the more established kernel density estimates, the other a newer approach called log-concave estimation. Either method, when applied to the change in refractive index, was found to give good estimates of glass category; however, on all performance metrics kernel density estimates were found to be slightly better than log-concave estimates, although the estimates from log-concave estimation possessed properties which had some qualitative appeal not encapsulated in the selected measures of performance. These results and the implications of these two methods of estimating probability densities for glass refractive indices are discussed. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  12. The Adaptation of the Moth Pheromone Receptor Neuron to its Natural Stimulus

    NASA Astrophysics Data System (ADS)

    Kostal, Lubomir; Lansky, Petr; Rospars, Jean-Pierre

    2008-07-01

    We analyze the first phase of information transduction in the model of the olfactory receptor neuron of the male moth Antheraea polyphemus. We predict such stimulus characteristics that enable the system to perform optimally, i.e., to transfer as much information as possible. Few a priori constraints on the nature of the stimulus and stimulus-to-signal transduction are assumed. The results are given in terms of stimulus distributions and intermittency factors, which makes direct comparison with experimental data possible. The optimal stimulus is approximately described by an exponential or log-normal probability density function, which is in agreement with experiment, and the predicted intermittency factors fall within the lowest range of observed values. The results are discussed with respect to electroantennogram measurements and behavioral observations.

  13. Statistical Characterization of the Mechanical Parameters of Intact Rock Under Triaxial Compression: An Experimental Proof of the Jinping Marble

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo

    2016-12-01

    We investigated the statistical characteristics and probability distribution of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (i.e., 5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical proofs relating to the mechanical parameters of rock presented new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.

  14. On the probability distribution function of the mass surface density of molecular clouds. I

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

    The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas which is modeled using a log-normal function while the second component is attributed to condensed structures modeled using a simple power-law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a truncated radial density profile at cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at maximal mass surface density. The PDF of spherical condensations, and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl), flattens for steeper density profiles and has a power law asymptote at low and high mass surface densities and a well defined maximum. The power index of the asymptote Σ^-γ of the logarithmic PDF (Σ P(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n - 1) - 1 (spheres) or by γ = n/(n - 1) - 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org
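    The asymptotic power index quoted above follows directly from the stated formulas; the short Python sketch below simply evaluates γ for a few profile exponents n, for both the spherical and cylindrical cases.

```python
# Sketch: asymptotic power index of the logarithmic PDF, Sigma*P(Sigma) ~ Sigma^(-gamma),
# for the quoted profile rho(r) = rho_c / (1 + (r/r_0)^2)^(n/2).
def gamma_sphere(n):
    return (n + 1) / (n - 1) - 1

def gamma_cylinder(n):          # limit of infinite overdensity
    return n / (n - 1) - 1

for n in (2, 3, 4):
    print(f"n={n}: gamma_sphere={gamma_sphere(n):.2f}, gamma_cylinder={gamma_cylinder(n):.2f}")
```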

  15. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parravano, Antonio; Sanchez, Nestor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.

  16. The Dependence of Prestellar Core Mass Distributions on the Structure of the Parental Cloud

    NASA Astrophysics Data System (ADS)

    Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle & Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle & Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.

  17. Impacts of logging on density-dependent predation of dipterocarp seeds in a South East Asian rainforest.

    PubMed

    Bagchi, Robert; Philipson, Christopher D; Slade, Eleanor M; Hector, Andy; Phillips, Sam; Villanueva, Jerome F; Lewis, Owen T; Lyal, Christopher H C; Nilus, Reuben; Madran, Adzley; Scholes, Julie D; Press, Malcolm C

    2011-11-27

    Much of the forest remaining in South East Asia has been selectively logged. The processes promoting species coexistence may be the key to the recovery and maintenance of diversity in these forests. One such process is the Janzen-Connell mechanism, where specialized natural enemies such as seed predators maintain diversity by inhibiting regeneration near conspecifics. In Neotropical forests, anthropogenic disturbance can disrupt the Janzen-Connell mechanism, but similar data are unavailable for South East Asia. We investigated the effects of conspecific density (two spatial scales) and distance from fruiting trees on seed and seedling survival of the canopy tree Parashorea malaanonan in unlogged and logged forests in Sabah, Malaysia. The production of mature seeds was higher in unlogged forest, perhaps because high adult densities facilitate pollination or satiate pre-dispersal predators. In both forest types, post-dispersal survival was reduced by small-scale (1 m(2)) conspecific density, but not by proximity to the nearest fruiting tree. Large-scale conspecific density (seeds per fruiting tree) reduced predation, probably by satiating predators. Higher seed production in unlogged forest, in combination with slightly higher survival, meant that recruitment was almost entirely limited to unlogged forest. Thus, while logging might not affect the Janzen-Connell mechanism at this site, it may influence the recruitment of particular species.

  18. Shape of growth-rate distribution determines the type of Non-Gibrat’s Property

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki

    2011-11-01

    In this study, the authors examine exhaustive business data on Japanese firms, which cover nearly all companies in the mid- and large-scale ranges in terms of firm size, to reach several key findings on profits/sales distribution and business growth trends. Here, profits denote net profits. First, detailed balance is observed not only in profits data but also in sales data. Furthermore, the growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale. On the one hand, in the mid-scale range of profits, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value increases. This is called Non-Gibrat’s First Property. On the other hand, in the mid-scale range of sales, the probability of positive growth decreases as the initial value increases, while the probability of negative growth hardly changes. This is called Non-Gibrat’s Second Property. Under detailed balance, Non-Gibrat’s First and Second Properties are analytically derived from the linear and quadratic growth-rate distributions in log-log scale, respectively. In both cases, the log-normal distribution is inferred from Non-Gibrat’s Properties and detailed balance. These analytic results are verified by empirical data. Consequently, this clarifies the notion that the difference in shapes between growth-rate distributions of sales and profits is closely related to the difference between the two Non-Gibrat’s Properties in the mid-scale range.

  19. Consolidation patterns during initiation and evolution of a plate-boundary decollement zone: Northern Barbados accretionary prism

    USGS Publications Warehouse

    Moore, J.C.; Klaus, A.; Bangs, N.L.; Bekins, B.; Bucker, C.J.; Bruckmann, W.; Erickson, S.N.; Hansen, O.; Horton, T.; Ireland, P.; Major, C.O.; Moore, Gregory F.; Peacock, S.; Saito, S.; Screaton, E.J.; Shimeld, J.W.; Stauffer, P.H.; Taymaz, T.; Teas, P.A.; Tokunaga, T.

    1998-01-01

    Borehole logs from the northern Barbados accretionary prism show that the plate-boundary decollement initiates in a low-density radiolarian claystone. With continued thrusting, the decollement zone consolidates, but in a patchy manner. The logs calibrate a three-dimensional seismic reflection image of the decollement zone and indicate which portions are of low density and enriched in fluid, and which portions have consolidated. The seismic image demonstrates that an underconsolidated patch of the decollement zone connects to a fluid-rich conduit extending down the decollement surface. Fluid migration up this conduit probably supports the open pore structure in the underconsolidated patch.

  20. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  1. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
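    The basic step of transmuting a UH into a pdf can be illustrated with a least-squares fit of one candidate family. The Python sketch below fits a two-parameter gamma pdf to UH ordinates with scipy's curve_fit; the ordinates are synthetic, not the Lighvan catchment data, and the initial guess and bounds are arbitrary choices.

```python
# Sketch: transmute a unit hydrograph into a two-parameter gamma pdf by
# nonlinear least squares, as for one of the candidate distributions above.
# The UH ordinates are synthetic, not the Lighvan catchment data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

t = np.arange(1, 25, dtype=float)                              # time (h)
uh_obs = gamma.pdf(t, a=3.0, scale=2.5) + 0.001 * np.sin(t)    # noisy toy UH ordinates

def gamma_uh(t, shape, scale):
    return gamma.pdf(t, a=shape, scale=scale)

(shape_hat, scale_hat), _ = curve_fit(gamma_uh, t, uh_obs,
                                      p0=(2.0, 2.0), bounds=(1e-3, np.inf))
print(f"fitted shape={shape_hat:.2f}, scale={scale_hat:.2f}")
print("time to peak ≈", (shape_hat - 1) * scale_hat)           # mode of the gamma pdf
```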

  2. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    NASA Technical Reports Server (NTRS)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, the observed probability distribution P(log E) of the wave field E is a power law, with the averaging taken over position. In this paper it is shown that stochastic growth theory (SGT) can explain such a power-law spatially-averaged distribution P(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log-normal statistics predicted by SGT at each location.

  3. Impacts of logging on density-dependent predation of dipterocarp seeds in a South East Asian rainforest

    PubMed Central

    Bagchi, Robert; Philipson, Christopher D.; Slade, Eleanor M.; Hector, Andy; Phillips, Sam; Villanueva, Jerome F.; Lewis, Owen T.; Lyal, Christopher H. C.; Nilus, Reuben; Madran, Adzley; Scholes, Julie D.; Press, Malcolm C.

    2011-01-01

    Much of the forest remaining in South East Asia has been selectively logged. The processes promoting species coexistence may be the key to the recovery and maintenance of diversity in these forests. One such process is the Janzen–Connell mechanism, where specialized natural enemies such as seed predators maintain diversity by inhibiting regeneration near conspecifics. In Neotropical forests, anthropogenic disturbance can disrupt the Janzen–Connell mechanism, but similar data are unavailable for South East Asia. We investigated the effects of conspecific density (two spatial scales) and distance from fruiting trees on seed and seedling survival of the canopy tree Parashorea malaanonan in unlogged and logged forests in Sabah, Malaysia. The production of mature seeds was higher in unlogged forest, perhaps because high adult densities facilitate pollination or satiate pre-dispersal predators. In both forest types, post-dispersal survival was reduced by small-scale (1 m2) conspecific density, but not by proximity to the nearest fruiting tree. Large-scale conspecific density (seeds per fruiting tree) reduced predation, probably by satiating predators. Higher seed production in unlogged forest, in combination with slightly higher survival, meant that recruitment was almost entirely limited to unlogged forest. Thus, while logging might not affect the Janzen–Connell mechanism at this site, it may influence the recruitment of particular species. PMID:22006965

  4. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  5. Tangled nature model of evolutionary dynamics reconsidered: Structural and dynamical effects of trait inheritance

    NASA Astrophysics Data System (ADS)

    Andersen, Christian Walther; Sibani, Paolo

    2016-05-01

    Based on the stochastic dynamics of interacting agents which reproduce, mutate, and die, the tangled nature model (TNM) describes key emergent features of biological and cultural ecosystems' evolution. While trait inheritance is not included in many applications, i.e., the interactions of an agent and those of its mutated offspring are taken to be uncorrelated, in the family of TNMs introduced in this work correlations of varying strength are parametrized by a positive integer K. We first show that the interactions generated by our rule are nearly independent of K. Consequently, the structural and dynamical effects of trait inheritance can be studied independently of effects related to the form of the interactions. We then show that changing K strengthens the core structure of the ecology, leads to population abundance distributions better approximated by log-normal probability densities, and increases the probability that a species extant at time tw also survives at t > tw. Finally, survival probabilities of species are shown to decay as powers of the ratio t/tw, a so-called pure aging behavior usually seen in glassy systems of physical origin. We find a quantitative dynamical effect of trait inheritance, namely, that increasing the value of K numerically decreases the decay exponent of the species survival probability.

  6. Tangled nature model of evolutionary dynamics reconsidered: Structural and dynamical effects of trait inheritance.

    PubMed

    Andersen, Christian Walther; Sibani, Paolo

    2016-05-01

    Based on the stochastic dynamics of interacting agents which reproduce, mutate, and die, the tangled nature model (TNM) describes key emergent features of biological and cultural ecosystems' evolution. While trait inheritance is not included in many applications, i.e., the interactions of an agent and those of its mutated offspring are taken to be uncorrelated, in the family of TNMs introduced in this work correlations of varying strength are parametrized by a positive integer K. We first show that the interactions generated by our rule are nearly independent of K. Consequently, the structural and dynamical effects of trait inheritance can be studied independently of effects related to the form of the interactions. We then show that changing K strengthens the core structure of the ecology, leads to population abundance distributions better approximated by log-normal probability densities, and increases the probability that a species extant at time t_{w} also survives at t>t_{w}. Finally, survival probabilities of species are shown to decay as powers of the ratio t/t_{w}, a so-called pure aging behavior usually seen in glassy systems of physical origin. We find a quantitative dynamical effect of trait inheritance, namely, that increasing the value of K numerically decreases the decay exponent of the species survival probability.

  7. Are CO Observations of Interstellar Clouds Tracing the H2?

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Glover, S. C. O.; Klessen, R. S.; Mac Low, M.

    2010-01-01

    Interstellar clouds are commonly observed through the emission of rotational transitions from carbon monoxide (CO). However, the abundance ratio of CO to molecular hydrogen (H2), which is the most abundant molecule in molecular clouds, is only about 10^-4. This raises the important question of whether the observed CO emission is actually tracing the bulk of the gas in these clouds, and whether it can be used to derive quantities like the total mass of the cloud, the gas density distribution function, the fractal dimension, and the velocity dispersion-size relation. To evaluate the usability and accuracy of CO as a tracer for H2 gas, we generate synthetic observations of hydrodynamical models that include a detailed chemical network to follow the formation and photo-dissociation of H2 and CO. These three-dimensional models of turbulent interstellar cloud formation self-consistently follow the coupled thermal, dynamical and chemical evolution of 32 species, with a particular focus on H2 and CO (Glover et al. 2009). We find that CO primarily traces the dense gas in the clouds, however, with a significant scatter due to turbulent mixing and self-shielding of H2 and CO. The H2 probability distribution function (PDF) is well described by a log-normal distribution. In contrast, the CO column density PDF has a strongly non-Gaussian low-density wing, not at all consistent with a log-normal distribution. Centroid velocity statistics show that CO is more intermittent than H2, leading to an overestimate of the velocity scaling exponent in the velocity dispersion-size relation. With our systematic comparison of H2 and CO data from the numerical models, we hope to provide a statistical formula to correct for the bias of CO observations. CF acknowledges financial support from a Kade Fellowship of the American Museum of Natural History.

  8. Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam

    NASA Astrophysics Data System (ADS)

    N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.

    In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, as it may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. Hourly average data over the one-year periods 2006 and 2007 were used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness-of-fit of the distributions. The best distribution fitting the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance concentrations were calculated, and the return period for the coming year was predicted from the cumulative distribution function (cdf) of the best-fit distribution. For the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area does not exceed the MAAQG of 150 μg/m3.
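    The exceedance-probability and return-period step described above is straightforward once a log-normal has been fitted by MLE. The Python sketch below does this for synthetic hourly concentrations; the data, the fitted parameters, and the interpretation of the return period as the mean spacing between exceedance hours are illustrative assumptions.

```python
# Sketch: fit a log-normal to hourly PM10 concentrations by MLE, then compute
# the exceedance probability of the 150 ug/m3 guideline and an approximate
# return period. The concentrations below are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pm10 = stats.lognorm.rvs(s=0.55, scale=55.0, size=8760, random_state=rng)  # one year, hourly

s_hat, loc_hat, scale_hat = stats.lognorm.fit(pm10, floc=0)
p_exceed = stats.lognorm.sf(150.0, s_hat, loc_hat, scale_hat)   # P(PM10 > 150)

expected_exceed_days = p_exceed * 8760 / 24.0                   # expected exceedance days/yr
return_period_days = 1.0 / (p_exceed * 24.0)                    # mean days between exceedance hours
print(f"P(>150) = {p_exceed:.4f}, ~{expected_exceed_days:.1f} exceedance days/yr, "
      f"return period ≈ one exceedance hour per {return_period_days:.0f} days")
```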

  9. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  10. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  11. Spectral density measurements of gyro noise

    NASA Technical Reports Server (NTRS)

    Truncale, A.; Koenigsberg, W.; Harris, R.

    1972-01-01

    Power spectral density (PSD) was used to analyze the outputs of several gyros in the frequency range from 0.01 to 200 Hz. Data were accumulated on eight inertial quality instruments. The results are described in terms of input angle noise (arcsec^2/Hz) and are presented on log-log plots of PSD. These data show that the standard deviation of measurement noise was 0.01 arcsec or less for some gyros in the passband from 1 Hz down to 0.01 Hz, and probably down to 0.001 Hz for at least one gyro. For the passband between 1 and 100 Hz, uncertainties in the 0.01 and 0.05 arcsec region were observed.

  12. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location.

  13. Alternate methods for FAAT S-curve generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaufman, A.M.

    The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory work-around solution is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work-around. These errors are at least several dB-W/cm2 at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
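    Under the log-normal assumption described above, the probability that stress exceeds strength has a closed form, because the difference of the logs is normally distributed. The Python sketch below evaluates that probability along a sweep of field strength to trace an S-curve; it is not the ARES code, and all parameter values are invented for illustration.

```python
# Sketch: probability of effect P(stress > strength) when both are log-normal.
# Because log(stress) - log(strength) is normal, the probability has a closed
# form. Not the ARES code; all parameter values are illustrative.
import numpy as np
from scipy.stats import norm

def prob_effect(mu_log_stress, sd_log_stress, mu_log_strength, sd_log_strength):
    d_mu = mu_log_stress - mu_log_strength
    d_sd = np.hypot(sd_log_stress, sd_log_strength)
    return norm.cdf(d_mu / d_sd)

# S-curve: sweep the (log) incident field strength driving the stress,
# with the strength held at a fixed log-mean of 3.0.
field = np.linspace(0.0, 6.0, 7)                 # log field strength (arbitrary units)
s_curve = prob_effect(field, 0.5, 3.0, 0.8)
print(np.round(s_curve, 3))
```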

  14. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  15. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  16. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro

    2011-12-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution which has a characteristic scale, rather than a power-law spectrum. This scale is expected to reflect the generation mechanisms, and our analysis is complementary to previous studies with power-law spectra. We calculate power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate CMB temperature and polarization angular power spectra from scalar, vector, and tensor modes of perturbations generated from such PMFs. By comparing these spectra with WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data set places the strongest constraint at k ≃ 10^-2.5 Mpc^-1, with the upper limit B ≲ 3 nG.

  17. Novel bayes factors that capture expert uncertainty in prior density specification in genetic association studies.

    PubMed

    Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin

    2015-05-01

    Bayes factors (BFs) are becoming increasingly important tools in genetic association studies, partly because they provide a natural framework for including prior information. The Wakefield BF (WBF) approximation is easy to calculate and assumes a normal prior on the log odds ratio (logOR) with a mean of zero. However, the prior variance (W) must be specified. Because of the potentially high sensitivity of the WBF to the choice of W, we propose several new BF approximations with logOR ∼N(0,W), but allow W to take a probability distribution rather than a fixed value. We provide several prior distributions for W which lead to BFs that can be calculated easily in freely available software packages. These priors allow a wide range of densities for W and provide considerable flexibility. We examine some properties of the priors and BFs and show how to determine the most appropriate prior based on elicited quantiles of the prior odds ratio (OR). We show by simulation that our novel BFs have superior true-positive rates at low false-positive rates compared to those from both P-value and WBF analyses across a range of sample sizes and ORs. We give an example of utilizing our BFs to fine-map the CASP8 region using genotype data on approximately 46,000 breast cancer case and 43,000 healthy control samples from the Collaborative Oncological Gene-environment Study (COGS) Consortium, and compare the single-nucleotide polymorphism ranks to those obtained using WBFs and P-values from univariate logistic regression. © 2015 The Authors. *Genetic Epidemiology published by Wiley Periodicals, Inc.
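    The fixed-variance normal-prior Bayes factor and its average over a prior on the prior variance W can be sketched in a few lines. The Python code below derives the fixed-W Bayes factor from the usual normal approximation to the log odds ratio estimate and then integrates it against an inverse-gamma prior on W by quadrature; the inverse-gamma choice and all numeric values are illustrative assumptions, not the priors proposed in the paper.

```python
# Sketch: a fixed-W Bayes factor for logOR ~ N(0, W) under the usual normal
# approximation (betahat with variance V), and its average over a prior on W
# by numerical quadrature -- the abstract's core idea. The inverse-gamma prior
# and all numbers are illustrative, not the authors' recommended priors.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bf10_fixed_w(betahat, V, W):
    """BF for association (H1: logOR ~ N(0, W)) vs H0 (logOR = 0).

    Derivation: under H1, betahat ~ N(0, V + W); under H0, betahat ~ N(0, V),
    so the ratio of marginal densities reduces to the expression below.
    """
    z2 = betahat**2 / V
    return np.sqrt(V / (V + W)) * np.exp(0.5 * z2 * W / (V + W))

def bf10_random_w(betahat, V, w_prior):
    """Average the fixed-W BF over a prior density for W."""
    integrand = lambda w: bf10_fixed_w(betahat, V, w) * w_prior.pdf(w)
    value, _ = quad(integrand, 0, np.inf)
    return value

betahat, V = 0.25, 0.01                        # toy effect estimate and its variance
w_prior = stats.invgamma(a=3.0, scale=0.1)     # illustrative prior on W
print("fixed-W BF (W=0.04):", bf10_fixed_w(betahat, V, 0.04))
print("prior-averaged BF:  ", bf10_random_w(betahat, V, w_prior))
```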

  18. THE CHANDRA COSMOS-LEGACY SURVEY: THE z > 3 SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marchesi, S.; Civano, F.; Urry, C. M.

    2016-08-20

    We present the largest high-redshift (3 < z < 6.85) sample of X-ray-selected active galactic nuclei (AGNs) on a contiguous field, using sources detected in the Chandra COSMOS-Legacy survey. The sample contains 174 sources, 87 with spectroscopic redshift and the other 87 with photometric redshift (z_phot). In this work, we treat z_phot as a probability-weighted sum of contributions, adding to our sample the contribution of sources with z_phot < 3 but with a z_phot probability distribution >0 at z > 3. We compute the number counts in the observed 0.5–2 keV band, finding a decline in the number of sources at z > 3 and constraining phenomenological models of the X-ray background. We compute the AGN space density at z > 3 in two different luminosity bins. At higher luminosities (log L(2–10 keV) > 44.1 erg s^−1), the space density declines exponentially, dropping by a factor of ∼20 from z ∼ 3 to z ∼ 6. The observed decline is ∼80% steeper at lower luminosities (43.55 erg s^−1 < log L(2–10 keV) < 44.1 erg s^−1) from z ∼ 3 to z ∼ 4.5. We study the space density evolution dividing our sample into optically classified Type 1 and Type 2 AGNs. At log L(2–10 keV) > 44.1 erg s^−1, unobscured and obscured objects may have different evolution with redshift, with the obscured component being three times higher at z ∼ 5. Finally, we compare our space density with predictions of quasar activation merger models, whose calibration is based on optically luminous AGNs. These models significantly overpredict the number of expected AGNs at log L(2–10 keV) > 44.1 erg s^−1 with respect to our data.

  19. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
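
    A minimal simulation sketch of the kind of process described here, assuming a generic Markov update with size-dependent additive Gaussian noise; the drift, noise exponent, and initial size are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_clusters(n_clusters=10_000, n_steps=200, drift=0.05, noise_exp=0.5, x0=1.0):
    """Markov growth x_{t+1} = x_t + drift + x_t**noise_exp * eta_t, i.e. additive
    Gaussian noise whose amplitude depends on the current cluster size
    (a generic sketch, not the specific master equation of the paper)."""
    x = np.full(n_clusters, x0)
    for _ in range(n_steps):
        eta = rng.normal(0.0, 0.1, size=n_clusters)
        x = np.maximum(x + drift + (x ** noise_exp) * eta, 1e-6)  # keep sizes positive
    return x

sizes = grow_clusters()
log_sizes = np.log(sizes)
print("mean(log size) = %.2f, std(log size) = %.2f" % (log_sizes.mean(), log_sizes.std()))
```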

  20. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative distribution to the gamma-gamma (GG) distribution, called inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit among the IGG, log-normal, and GG distributions with the experimental probability density functions in moderate-to-strong turbulence are compared, and results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. As the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
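
    The modulation structure of the IGG model (small-scale gamma fluctuations multiplying large-scale inverse Gaussian fluctuations) can be sketched by sampling, as below; the shape parameters a and mu are arbitrary placeholders, not fitted atmospheric values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000

# Small-scale irradiance fluctuations: gamma with unit mean (shape a, scale 1/a).
a = 4.0
small = stats.gamma.rvs(a, scale=1.0 / a, size=n, random_state=rng)

# Large-scale irradiance fluctuations: inverse Gaussian with unit mean.
mu = 0.5
large = stats.invgauss.rvs(mu, scale=1.0 / mu, size=n, random_state=rng)

irradiance = small * large            # modulation of the two processes
irradiance /= irradiance.mean()       # normalise to unit mean irradiance

# Compare against a log-normal description of the same sample.
shape, loc, scale = stats.lognorm.fit(irradiance, floc=0.0)
print("scintillation index (var for unit mean):", round(irradiance.var(), 3))
print("sigma of a log-normal fitted to the same data:", round(shape, 3))
```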

  1. Stick-slip behavior in a continuum-granular experiment.

    PubMed

    Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott

    2015-12-01

    We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^{-3/2} power-law scaling with Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.

  2. Determining prescription durations based on the parametric waiting time distribution.

    PubMed

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-12-01

    The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
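
    The final step of the method, taking the 80th percentile of a log-normal inter-arrival density as the prescription duration, can be sketched as follows; the log-normal parameters are hypothetical stand-ins for values that would come from the fitted waiting time distribution.

```python
import numpy as np
from scipy import stats

# Hypothetical log-normal inter-arrival density (IAD) parameters; in the actual
# method these would come from inverting the forward recurrence density fitted
# to the waiting time distribution.
mu_log, sigma_log = np.log(90.0), 0.5          # placeholder values, time in days

iad = stats.lognorm(s=sigma_log, scale=np.exp(mu_log))

# Prescription duration = time within which 80% of current users
# will have presented themselves again (80th percentile of the IAD).
duration_days = iad.ppf(0.80)
print("estimated prescription duration: %.0f days" % duration_days)
```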

  3. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT.
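
    A minimal sketch of the maximum-likelihood log-normal fit and the extrapolation to Carrington-class exceedance rates; the -Dst maxima and the storm rate used here are placeholder values, not the 1957-2012 record analyzed in the paper.

```python
import numpy as np
from scipy import stats

# Placeholder -Dst storm-time maxima in nT (NOT the 1957-2012 record of the paper).
dst_maxima = np.array([120., 150., 170., 190., 210., 240., 280., 320., 360., 420., 589.])

# Maximum-likelihood log-normal fit with the location fixed at zero.
sigma, loc, scale = stats.lognorm.fit(dst_maxima, floc=0.0)

# Exceedance probability of a Carrington-class storm (-Dst >= 850 nT) ...
p_carrington = stats.lognorm.sf(850.0, sigma, loc=loc, scale=scale)

# ... converted to an occurrence rate per century using the rate at which storms
# enter the catalogue (assumed here to be the sample size over a 56-year span).
storms_per_century = len(dst_maxima) / 56.0 * 100.0
print("Carrington-level storms per century: %.3f" % (p_carrington * storms_per_century))
```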

  4. Clinical impact of dosimetric changes for volumetric modulated arc therapy in log file-based patient dose calculations.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi

    2017-10-01

    A log file-based method cannot detect dosimetric changes due to linac component miscalibration because log files are insensitive to miscalibration. Herein, the clinical impacts of dosimetric changes on a log file-based method were determined. Five head-and-neck and five prostate plans were applied. Miscalibration-simulated log files were generated by inducing a linac component miscalibration into the log file. Miscalibration magnitudes for leaf, gantry, and collimator at the general tolerance level were ±0.5 mm, ±1°, and ±1°, respectively, and at a tighter tolerance level achievable on a current linac were ±0.3 mm, ±0.5°, and ±0.5°, respectively. Re-calculations were performed on patient anatomy using log file data. Changes in tumor control probability/normal tissue complication probability from the treatment planning system dose to the re-calculated dose at the general tolerance level were 1.8% on the planning target volume (PTV) and 2.4% on organs at risk (OARs) in both plan types. These changes at the tighter tolerance level were improved to 1.0% on PTV and to 1.5% on OARs, with a statistically significant difference. We determined the clinical impacts of dosimetric changes on a log file-based method using a general tolerance level and a tighter tolerance level for linac miscalibration and found that a tighter tolerance level significantly improved the accuracy of the log file-based method. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  6. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  7. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  8. A hybrid probabilistic/spectral model of scalar mixing

    NASA Astrophysics Data System (ADS)

    Vaithianathan, T.; Collins, Lance

    2002-11-01

    In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure for the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double-delta-function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches a log-normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.

  9. Log file-based patient dose calculations of double-arc VMAT for head-and-neck radiotherapy.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Majima, Kazuhiro; Matsushita, Haruo; Takeda, Ken; Jingu, Keiichi

    2018-04-01

    The log file-based method cannot detect dosimetric changes due to linac component miscalibration because of the insensitivity of log files to linac component miscalibration. The purpose of this study was to quantify the dosimetric changes in log file-based patient dose calculations for double-arc volumetric-modulated arc therapy (VMAT) in head-and-neck cases. Fifteen head-and-neck cases participated in this study. For each case, treatment planning system (TPS) doses were produced by double-arc and single-arc VMAT. Miscalibration-simulated log files were generated by inducing a leaf miscalibration of ±0.5 mm into the log files that were acquired during VMAT irradiation. Subsequently, patient doses were estimated using the miscalibration-simulated log files. For double-arc VMAT, regarding the planning target volume (PTV), the change from the TPS dose to the miscalibration-simulated log file dose in D_mean was 0.9 Gy and that for tumor control probability was 1.4%. As for organs at risk (OARs), the change in D_mean was <0.7 Gy and in normal tissue complication probability was <1.8%. A comparison between double-arc and single-arc VMAT for PTV showed statistically significant differences in the changes evaluated by D_mean and radiobiological metrics (P < 0.01), even though the magnitude of these differences was small. Similarly, for OARs, the magnitude of these changes was found to be small. For PTV and OARs, the log file-based estimate of patient dose for double-arc VMAT has accuracy comparable to that obtained for single-arc VMAT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  10. 43 CFR 2812.0-6 - Statement of policy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the O. and C. lands presents peculiar problems of management which require for their solution the... significant part by the cost of transporting the logs to the mill. Where there is an existing road which is... capacity to accommodate the probable normal requirements both of the applicant and of the Government and...

  11. Statistical hypothesis tests of some micrometeorological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SethuRaman, S.; Tichler, J.

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g_1 has a good correlation with the chi-square values. Events with |g_1| < 0.21 were normal to begin with, and those with 0.21 ...
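
    A compact version of this workflow, computing skewness and excess and a chi-square goodness-of-fit statistic against a fitted normal density, might look like the following; the data series is synthetic and the binning is a simplification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=500)     # synthetic stand-in for a turbulence series

# Coefficients of skewness (g1) and excess (g2).
g1, g2 = stats.skew(data), stats.kurtosis(data)

# Chi-square goodness-of-fit against a normal density with the sample moments.
edges = np.linspace(data.min(), data.max(), 11)
observed, _ = np.histogram(data, bins=edges)
cdf = stats.norm.cdf(edges, loc=data.mean(), scale=data.std(ddof=1))
expected = len(data) * np.diff(cdf)
expected *= observed.sum() / expected.sum()      # renormalise to the observed total
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = len(observed) - 3                          # bins - 1 - two fitted parameters
print("g1=%.2f  g2=%.2f  chi2=%.1f  p=%.3f" % (g1, g2, chi2, stats.chi2.sf(chi2, dof)))
```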

  12. Statistics of Advective Stretching in Three-dimensional Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.

    2009-09-01

    We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of the Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian and that the coefficient of variation of the Gaussian distribution does not decrease with time as t^{-1/2}. However, it is observed that stretching becomes exponential, log S ~ t, and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behaviour of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing, respectively. We find that for strongly chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^{-1/2}. This behavior is consistent with a random multiplicative stretching process.

  13. Criticality and Phase Transition in Stock-Price Fluctuations

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu

    2006-02-01

    We analyze the behavior of the U.S. S&P 500 index from 1984 to 1995, and characterize the non-Gaussian probability density functions (PDF) of the log returns. The temporal dependence of fat tails in the PDF of a ten-minute log return shows a gradual, systematic increase in the probability of the appearance of large increments on approaching black Monday in October 1987, reminiscent of parameter tuning towards criticality. On the occurrence of the black Monday crash, this culminates in an abrupt transition of the scale dependence of the non-Gaussian PDF towards scale-invariance characteristic of critical behavior. These facts suggest the need for revisiting the turbulent cascade paradigm recently proposed for modeling the underlying dynamics of the financial index, to account for time-varying, phase-transition-like and scale-invariant, critical-like behavior.

  14. [Distribution of individuals by spontaneous frequencies of lymphocytes with micronuclei. Particularity and consequences].

    PubMed

    Serebrianyĭ, A M; Akleev, A V; Aleshchenko, A V; Antoshchina, M M; Kudriashova, O V; Riabchenko, N I; Semenova, L P; Pelevina, I I

    2011-01-01

    Using the micronucleus (MN) assay with cytochalasin B cytokinesis block, the mean frequency of blood lymphocytes with MN was determined in 76 Moscow inhabitants, 35 people from Obninsk, and 122 from the Chelyabinsk region. In contrast to the distribution of individuals by spontaneous frequency of cells with aberrations, which was shown to be binomial (Kusnetzov et al., 1980), the distribution of individuals by spontaneous frequency of cells with MN in all three groups can be regarded as log-normal (chi-square test). The distribution of individuals in the combined Moscow and Obninsk group, and in the single group of all subjects, is reliably log-normal (0.70 and 0.86, respectively), but cannot be regarded as Poisson, binomial, or normal. Taking into account that a log-normal distribution of children by spontaneous frequency of lymphocytes with MN was also observed in a survey of 473 children from different kindergartens in Moscow, we conclude that log-normality is a regularity inherent in this type of lymphocyte genome damage. In contrast, the distribution of individuals by the frequency of lymphocytes with MN induced by in vitro irradiation must in most cases be regarded as normal. This distributional difference suggests that the appearance of damage (genomic instability) in a single lymphocyte of an individual increases the probability of damage appearing in other lymphocytes. We propose that damaged stem-cell lymphocyte progenitors exchange information with undamaged cells, a process of the bystander-effect type. It can also be supposed that transmission of damage to daughter cells occurs at the time of stem cell division.

  15. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
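
    A small numerical analogue of this idea, assuming two classes with known means and covariances and minimizing the one-dimensional Bayes misclassification probability of the projected densities over the projection direction; the class parameters are invented for illustration and the optimizer is a generic simplex search rather than the program's own procedure.

```python
import numpy as np
from scipy import optimize, stats

# Two hypothetical classes with known means and covariances (illustrative values).
means = [np.array([0.0, 0.0]), np.array([1.5, 1.0])]
covs = [np.eye(2), np.array([[1.0, 0.3], [0.3, 2.0]])]
priors = [0.5, 0.5]

def misclassification(b):
    """One-dimensional Bayes error after projecting both Gaussian classes onto b."""
    b = b / np.linalg.norm(b)
    mus = [b @ m for m in means]
    sds = [np.sqrt(b @ S @ b) for S in covs]
    x = np.linspace(min(mus) - 6 * max(sds), max(mus) + 6 * max(sds), 4000)
    dens = [p * stats.norm.pdf(x, mu, sd) for p, mu, sd in zip(priors, mus, sds)]
    return np.trapz(np.minimum(*dens), x)    # overlap of the prior-weighted densities

res = optimize.minimize(misclassification, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
best = res.x / np.linalg.norm(res.x)
print("best projection direction:", best, "misclassification probability: %.4f" % res.fun)
```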

  16. Comparative analysis of background EEG activity in childhood absence epilepsy during valproate treatment: a standardized, low-resolution, brain electromagnetic tomography (sLORETA) study.

    PubMed

    Shin, Jung-Hyun; Eom, Tae-Hoon; Kim, Young-Hoon; Chung, Seung-Yun; Lee, In-Goo; Kim, Jung-Min

    2017-07-01

    Valproate (VPA) is an antiepileptic drug (AED) used for initial monotherapy in treating childhood absence epilepsy (CAE). EEG might be an alternative approach to explore the effects of AEDs on the central nervous system. We performed a comparative analysis of background EEG activity during VPA treatment by using standardized, low-resolution, brain electromagnetic tomography (sLORETA) to explore the effect of VPA in patients with CAE. In 17 children with CAE, non-parametric statistical analyses using sLORETA were performed to compare the current density distribution of four frequency bands (delta, theta, alpha, and beta) between the untreated and treated condition. Maximum differences in current density were found in the left inferior frontal gyrus for the delta frequency band (log-F-ratio = -1.390, P > 0.05), the left medial frontal gyrus for the theta frequency band (log-F-ratio = -0.940, P > 0.05), the left inferior frontal gyrus for the alpha frequency band (log-F-ratio = -0.590, P > 0.05), and the left anterior cingulate for the beta frequency band (log-F-ratio = -1.318, P > 0.05). However, none of these differences were significant (threshold log-F-ratio = ±1.888, P < 0.01; threshold log-F-ratio = ±1.722, P < 0.05). Because EEG background is accepted as normal in CAE, VPA would not be expected to significantly change abnormal thalamocortical oscillations on a normal EEG background. Therefore, our results agree with currently accepted concepts but are not consistent with findings in some previous studies.

  17. Determining the depositional pattern by resistivity-seismic inversion for the aquifer system of Maira area, Pakistan.

    PubMed

    Akhter, Gulraiz; Farid, Asim; Ahmad, Zulfiqar

    2012-01-01

    Velocity and density measured in a well are crucial for synthetic seismic generation, which is, in turn, a key to interpreting real seismic amplitude in terms of lithology, porosity, and fluid content. Investigations made in water wells usually consist of spontaneous potential, long- and short-normal resistivity, point resistivity, and gamma-ray logs. Sonic logs are not available because these are usually run in wells drilled for hydrocarbons. To generate synthetic seismograms, sonic and density logs are required, which are useful to precisely mark the lithology contacts and formation tops. An attempt has been made to interpret the subsurface soil of the aquifer system by means of resistivity-to-seismic inversion. For this purpose, resistivity logs and surface resistivity soundings were used; the resistivity logs were converted to sonic logs, whereas the surface resistivity sounding data were transformed into seismic curves. The converted sonic logs and the surface seismic curves were then used to generate synthetic seismograms. With these synthetic seismograms, pseudo-seismic sections were developed. Subsurface lithologies encountered in wells exhibit different velocities and densities. The reflection patterns were marked by using amplitude standout, character, and coherence. These pseudo-seismic sections were later tied to well synthetics and lithologs. In this way, a lithology cross-section was created for the alluvial fill. The cross-section suggested that the eastern portion of the studied area mainly consists of sandy fill, whereas the western portion is mainly clayey. This can be attributed to the depositional environment created by the Indus and Kabul Rivers.

  18. Log-Normality and Multifractal Analysis of Flame Surface Statistics

    NASA Astrophysics Data System (ADS)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2013-11-01

    The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet, or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at corresponding area-ratio pdfs. Both pdfs are found to be near log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame-length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis. Currently at Indian Institute of Science, India.

  19. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    PubMed

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations, both in the study creek and in the WWTP effluent, follow log-normal probability distributions. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after a supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the calculated WWTP emission limits would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
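
    A Monte Carlo sketch of the mixing-equation approach, assuming log-normal inputs and a 90th-percentile EQS; all discharges, concentrations, and coefficients of variation below are placeholders, not the study-catchment values.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

def lognormal(mean, cv, size):
    """Sample a log-normal with a given arithmetic mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cv ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

# Hypothetical dry-weather inputs (placeholders, not the study-catchment values).
q_creek = lognormal(0.20, 0.50, N)    # creek discharge, m3/s
c_creek = lognormal(0.10, 0.42, N)    # upstream P_tot concentration, mg/l
q_wwtp  = lognormal(0.05, 0.30, N)    # WWTP effluent discharge, m3/s
eqs_c90 = 0.2                         # P_tot EQS expressed as a 90th percentile, mg/l

# Scan candidate mean effluent concentrations; keep the highest one meeting the EQS.
for c_eff_mean in np.arange(0.1, 3.0, 0.1):
    c_eff = lognormal(c_eff_mean, 0.42, N)
    c_down = (q_creek * c_creek + q_wwtp * c_eff) / (q_creek + q_wwtp)  # mixing equation
    if np.percentile(c_down, 90) > eqs_c90:
        print("highest admissible mean effluent P_tot: %.1f mg/l" % (c_eff_mean - 0.1))
        break
```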

  20. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  1. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  2. On the probability density function and characteristic function moments of image steganalysis in the log prediction error wavelet subband

    NASA Astrophysics Data System (ADS)

    Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang

    2017-01-01

    Extracting informative statistical features is the most essential technical issue of steganalysis. Among various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features due to their excellent ability to distinguish cover images from stego ones. The two types of features are quite similar in definition. The only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. The comparison between PDF and CF moments is therefore an interesting question of steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of the wavelet decomposition, some experiments show that the result is the opposite and has lacked a rigorous explanation. To solve this problem, a comparison result based on rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. This work aims to open the theoretical discussion on steganalysis and the question of finding suitable statistical features.
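
    The two feature types can be illustrated side by side as below: moments of a subband histogram in the spatial domain versus moments of its Fourier-transformed (characteristic function) magnitude; the Laplacian-distributed stand-in for wavelet coefficients and the exact moment definitions are simplifying assumptions.

```python
import numpy as np

def pdf_and_cf_moments(coeffs, n_bins=256, order=2):
    """Empirical PDF moments (spatial domain) and CF moments (Fourier domain) of a
    subband histogram; a generic sketch of the two feature families."""
    hist, edges = np.histogram(coeffs, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pdf_moments = [np.sum(np.abs(centers) ** k * hist) / np.sum(hist)
                   for k in range(1, order + 1)]

    cf = np.abs(np.fft.fft(hist))        # characteristic function magnitude
    half = len(cf) // 2                  # keep the non-redundant half
    freqs = np.arange(half)
    cf_moments = [np.sum(freqs ** k * cf[:half]) / np.sum(cf[:half])
                  for k in range(1, order + 1)]
    return pdf_moments, cf_moments

rng = np.random.default_rng(4)
subband = rng.laplace(0.0, 1.0, size=10_000)   # stand-in for log prediction-error coefficients
print(pdf_and_cf_moments(subband))
```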

  3. Joint constraints on galaxy bias and σ{sub 8} through the N-pdf of the galaxy number density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio

    We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the assumption, commonly adopted, that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ_8). It also provides the proper framework to perform model selection between two competitive hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from the Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄_8 = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30 h^−1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.

  4. Probabilistic properties of wavelets in kinetic surface roughening

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.

    2001-08-01

    Using the data of a recent numerical simulation [M. Ahr and M. Biehl, Phys. Rev. E 62, 1773 (2000)] of homoepitaxial growth it is shown that the observed probability distribution of a wavelet based measure of the growing surface roughness is consistent with a stretched log-normal distribution and the corresponding branching dimension depends on the level of particle desorption.

  5. Flux-limited sample of Galactic carbon stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claussen, M.J.; Kleinmann, S.G.; Joyce, R.R.

    Published observational data (including IRAS observations) for a flux-limited sample of 215 Galactic carbon stars (CSs) selected from the 2-micron sky survey of Neugebauer and Leighton (1969) are compiled in extensive tables and graphs and analyzed statistically. The sample is found to penetrate a volume of radius 1.5 kpc, and the local CS space density and surface density are calculated as log rho0 (per cubic kpc) = 2.0 ± 0.4 and log N (per square kpc) = 1.6 ± 0.2, respectively. The total Galactic mass-return rate from these CSs is estimated as 0.013 solar mass/yr, implying a time scale of 0.1-1 Myr for the CS evolutionary phase and a mass of 1.2-1.6 solar mass for the (probably F-type) main-sequence progenitors of CSs. 81 references.

  6. Analysis of porosity distribution of large-scale porous media and their reconstruction by Langevin equation.

    PubMed

    Jafari, G Reza; Sahimi, Muhammad; Rasaei, M Reza; Tabar, M Reza Rahimi

    2011-02-01

    Several methods have been developed in the past for analyzing the porosity and other types of well logs for large-scale porous media, such as oil reservoirs, as well as their permeability distributions. We developed a method for analyzing the porosity logs ϕ(h) (where h is the depth) and similar data that are often nonstationary stochastic series. In this method one first generates a new stationary series based on the original data, and then analyzes the resulting series. It is shown that the series based on the successive increments of the log y(h)=ϕ(h+δh)-ϕ(h) is a stationary and Markov process, characterized by a Markov length scale h(M). The coefficients of the Kramers-Moyal expansion for the conditional probability density function (PDF) P(y,h|y(0),h(0)) are then computed. The resulting PDFs satisfy a Fokker-Planck (FP) equation, which is equivalent to a Langevin equation for y(h) that provides probabilistic predictions for the porosity logs. We also show that the Hurst exponent H of the self-affine distributions, which have been used in the past to describe the porosity logs, is directly linked to the drift and diffusion coefficients that we compute for the FP equation. Also computed are the level-crossing probabilities that provide insight into identifying the high or low values of the porosity beyond the depth interval in which the data have been measured. ©2011 American Physical Society
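
    A minimal binned estimator of the first two Kramers-Moyal coefficients (drift and diffusion) from the increment series, applied here to a synthetic Ornstein-Uhlenbeck-like test signal rather than real porosity logs.

```python
import numpy as np

def kramers_moyal(y, dh=1.0, n_bins=30, min_count=50):
    """Binned estimates of drift D1(y) and diffusion D2(y) of a Markov series y(h)
    from the conditional moments of its increments (minimal estimator)."""
    dy, y0 = y[1:] - y[:-1], y[:-1]
    edges = np.linspace(y0.min(), y0.max(), n_bins + 1)
    idx = np.digitize(y0, edges) - 1
    centers, d1, d2 = [], [], []
    for i in range(n_bins):
        sel = idx == i
        if sel.sum() < min_count:              # skip poorly populated bins
            continue
        centers.append(0.5 * (edges[i] + edges[i + 1]))
        d1.append(dy[sel].mean() / dh)
        d2.append((dy[sel] ** 2).mean() / (2.0 * dh))
    return np.array(centers), np.array(d1), np.array(d2)

# Synthetic Ornstein-Uhlenbeck-like signal standing in for y(h) = phi(h+dh) - phi(h).
rng = np.random.default_rng(5)
y = np.zeros(200_000)
for i in range(1, y.size):
    y[i] = y[i - 1] - 0.1 * y[i - 1] + 0.3 * rng.normal()
centers, d1, d2 = kramers_moyal(y)
print("estimated drift slope:", round(np.polyfit(centers, d1, 1)[0], 3))  # near -0.1
```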

  7. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
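
    A bare-bones percentile-bootstrap counterpart of this comparison, using synthetic paired log(AUC) data; the sample size, effect size, and the use of a simple percentile interval (rather than BCa) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical paired log(AUC) values for reference and test formulations.
n = 24
log_auc_ref = rng.normal(np.log(100.0), 0.25, n)
log_auc_test = log_auc_ref + rng.normal(0.05, 0.10, n)   # formulation effect on the log scale
diff = log_auc_test - log_auc_ref

# Nonparametric bootstrap of the mean formulation effect, back-transformed to a ratio.
boot = np.array([rng.choice(diff, size=n, replace=True).mean() for _ in range(10_000)])
lo, hi = np.exp(np.percentile(boot, [5, 95]))
print("bootstrap 90%% CI of the AUC ratio: %.3f - %.3f" % (lo, hi))
print("bioequivalent under the 80-125% rule:", (lo >= 0.80) and (hi <= 1.25))
```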

  8. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log(AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699

  9. Agnostic stacking of intergalactic doublet absorption: measuring the Ne VIII population

    NASA Astrophysics Data System (ADS)

    Frank, Stephan; Pieri, Matthew M.; Mathur, Smita; Danforth, Charles W.; Shull, J. Michael

    2018-05-01

    We present a blind search for doublet intergalactic metal absorption with a method dubbed 'agnostic stacking'. Using a forward-modelling framework, we combine this with direct detections in the literature to measure the overall metal population. We apply this novel approach to the search for Ne VIII absorption in a set of 26 high-quality COS spectra. We probe to an unprecedented low limit of log N > 12.3 at 0.47 ≤ z ≤ 1.34 over a path-length Δz = 7.36. This method selects apparent absorption without requiring knowledge of its source. Stacking this mixed population dilutes doublet features in composite spectra in a deterministic manner, allowing us to measure the proportion corresponding to Ne VIII absorption. We stack potential Ne VIII absorption in two regimes: absorption too weak to be significant in direct line studies (12.3 < log N < 13.7), and strong absorbers (log N > 13.7). We do not detect Ne VIII absorption in either regime. Combining our measurements with direct detections, we find that the Ne VIII population is reproduced by a power-law column density distribution function with slope β = -1.86^{+0.18}_{-0.26} and normalization log f_{13.7} = -13.99^{+0.20}_{-0.23}, leading to an incidence rate of strong Ne VIII absorbers dn/dz = 1.38^{+0.97}_{-0.82}. We infer a cosmic mass density for Ne VIII gas with 12.3 < log N < 15.0 of Ω_{Ne VIII} = 2.2^{+1.6}_{-1.2} × 10^{-8}, a value significantly lower than that predicted by recent simulations. We translate this density into an estimate of the baryon density Ω_b ≈ 1.8 × 10^{-3}, constituting 4 per cent of the total baryonic mass.

  10. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.

  11. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of the difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9 to 60.9 million ha^-1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha^-1 (95% CI [2.4-80.7] million ha^-1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha^-1). Based upon assumptions regarding the amount of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate that >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty in overwinter density estimates. Nevertheless, the estimate is of the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031

  12. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of the difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9 to 60.9 million ha^-1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha^-1 (95% CI [2.4-80.7] million ha^-1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha^-1). Based upon assumptions regarding the amount of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate that >1.8 billion stems are needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty in overwinter density estimates. Nevertheless, the estimate is of the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  13. Mycobacterial Cultures Contain Cell Size and Density Specific Sub-populations of Cells with Significant Differential Susceptibility to Antibiotics, Oxidative and Nitrite Stress

    PubMed Central

    Vijay, Srinivasan; Nair, Rashmi Ravindran; Sharan, Deepti; Jakkala, Kishor; Mukkayyan, Nagaraja; Swaminath, Sharmada; Pradhan, Atul; Joshi, Niranjan V.; Ajitkumar, Parthasarathi

    2017-01-01

    The present study shows the existence of two specific sub-populations of Mycobacterium smegmatis and Mycobacterium tuberculosis cells differing in size and density, in the mid-log phase (MLP) cultures, with significant differential susceptibility to antibiotic, oxidative, and nitrite stress. One of these sub-populations (~10% of the total population), contained short-sized cells (SCs) generated through highly-deviated asymmetric cell division (ACD) of normal/long-sized mother cells and symmetric cell divisions (SCD) of short-sized mother cells. The other sub-population (~90% of the total population) contained normal/long-sized cells (NCs). The SCs were acid-fast stainable and heat-susceptible, and contained high density of membrane vesicles (MVs, known to be lipid-rich) on their surface, while the NCs possessed negligible density of MVs on the surface, as revealed by scanning and transmission electron microscopy. Percoll density gradient fractionation of MLP cultures showed the SCs-enriched fraction (SCF) at lower density (probably indicating lipid-richness) and the NCs-enriched fraction (NCF) at higher density of percoll fractions. While live cell imaging showed that the SCs and the NCs could grow and divide to form colony on agarose pads, the SCF, and NCF cells could independently regenerate MLP populations in liquid and solid media, indicating their full genomic content and population regeneration potential. CFU based assays showed the SCF cells to be significantly more susceptible than NCF cells to a range of concentrations of rifampicin and isoniazid (antibiotic stress), H2O2 (oxidative stress),and acidified NaNO2 (nitrite stress). Live cell imaging showed significantly higher susceptibility of the SCs of SC-NC sister daughter cell pairs, formed from highly-deviated ACD of normal/long-sized mother cells, to rifampicin and H2O2, as compared to the sister daughter NCs, irrespective of their comparable growth rates. The SC-SC sister daughter cell pairs, formed from the SCDs of short-sized mother cells and having comparable growth rates, always showed comparable stress-susceptibility. These observations and the presence of M. tuberculosis SCs and NCs in pulmonary tuberculosis patients' sputum earlier reported by us imply a physiological role for the SCs and the NCs under the stress conditions. The plausible reasons for the higher stress susceptibility of SCs and lower stress susceptibility of NCs are discussed. PMID:28377757

  14. Flame surface statistics of constant-pressure turbulent expanding premixed flames

    NASA Astrophysics Data System (ADS)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2014-04-01

    In this paper we investigate the local flame surface statistics of constant-pressure turbulent expanding flames. First, the statistics of the local length ratio are experimentally determined from high-speed planar Mie scattering images of spherically expanding flames, with the length ratio on the measurement plane, at predefined equiangular sectors, defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming isotropic distribution of such flame segments, we then convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. It is found that both the length-ratio and area-ratio pdfs are near log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame-length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis.

  15. Statistical distribution of building lot frontage: application for Tokyo downtown districts

    NASA Astrophysics Data System (ADS)

    Usui, Hiroyuki

    2018-03-01

    The frontage of a building lot is the determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable for identifying potential districts that comprise a high percentage of building lots with narrow frontage after subdivision, and for reconsidering the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages in relation to the density of buildings and roads has not been fully researched. In this paper, based on an empirical study of the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the statistical distribution of building lot frontages follows a log-normal distribution whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the values of the coefficient of variation of building lot frontages and of the ratio of the number of building lot frontages to that of buildings are approximately 0.60 and 1.19, respectively.
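
    A sketch of the distribution-comparison step, fitting log-normal and gamma models to a synthetic frontage sample and comparing them by Kolmogorov-Smirnov distance and AIC; the sample itself and the fixed zero location are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic sample of building lot frontages in metres (not Tokyo data).
frontage = rng.lognormal(mean=np.log(5.0), sigma=0.6, size=2000)

fits = {
    "log-normal": stats.lognorm(*stats.lognorm.fit(frontage, floc=0.0)),
    "gamma": stats.gamma(*stats.gamma.fit(frontage, floc=0.0)),
}
for name, dist in fits.items():
    ks = stats.kstest(frontage, dist.cdf)
    aic = 2 * 2 - 2 * np.sum(dist.logpdf(frontage))   # two free parameters each (loc fixed)
    print("%-10s  KS D=%.3f  AIC=%.1f" % (name, ks.statistic, aic))
```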

  16. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    PubMed

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull in only three out of 15 cases with a statistical significance. The results are far more complex than what had heretofore been published and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons based on their skewness and kurtosis values without a statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
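
    A sketch of this kind of goodness-of-fit comparison, fitting normal, log-normal, and Weibull models to a synthetic ACOF sample and reporting, in addition, the slip probability against an assumed RCOF of 0.35; note that K-S p-values computed with parameters fitted from the same data are optimistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Synthetic ACOF measurements for one floor/contaminant condition (placeholder data).
acof = rng.normal(0.45, 0.05, size=100).clip(min=0.01)

candidates = {
    "normal": stats.norm(*stats.norm.fit(acof)),
    "log-normal": stats.lognorm(*stats.lognorm.fit(acof, floc=0.0)),
    "Weibull": stats.weibull_min(*stats.weibull_min.fit(acof, floc=0.0)),
}
for name, dist in candidates.items():
    d, p = stats.kstest(acof, dist.cdf)       # p is optimistic: parameters fitted from the data
    slip = dist.cdf(0.35)                     # P(ACOF < assumed RCOF of 0.35)
    print("%-10s  KS D=%.3f  p=%.2f  P(slip)=%.3f" % (name, d, p, slip))
```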

  17. On the probability distribution function of the mass surface density of molecular clouds. II.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-11-01

    The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures was analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2), with central density ρ_c and inner radius r_0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures, with emphasis on the dependence of the PDF on the external pressure p_ext and on the overpressure q^(-1) = p_c/p_ext, where p_c is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects caused by a variation of pressure ratio, a distribution of condensed cores within a turbulent gas, and (in the case of cylinders) a distribution of inclination angles on the mean PDF are analyzed. The probability distribution of pressure ratios q^(-1) is assumed to be given by P(q^(-1)) ∝ q^(-k_1)/(1 + (q_0/q)^γ)^((k_1 + k_2)/γ), where k_1, γ, k_2, and q_0 are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from P_sph(Σ) ∝ Σ^(-2) to P_sph(Σ) ∝ Σ^(-3). The corresponding asymptote of the PDF of cylinders for large q^(-1) is approximately given by P_cyl(Σ) ∝ Σ^(-4/3) (1 - (Σ/Σ(0))^(2/3))^(-1/2). The distribution of overpressures q^(-1) produces a power-law asymptote at high mass surface densities given by ∝ Σ^(-2k_2 - 1) (spheres) or ∝ Σ^(-2k_2) (cylinders). Appendices are available in electronic form at http://www.aanda.org

  18. Density of Upper Respiratory Colonization With Streptococcus pneumoniae and Its Role in the Diagnosis of Pneumococcal Pneumonia Among Children Aged <5 Years in the PERCH Study

    PubMed Central

    Baggett, Henry C; Watson, Nora L; Deloria Knoll, Maria; Brooks, W Abdullah; Feikin, Daniel R; Hammitt, Laura L; Howie, Stephen R C; Kotloff, Karen L; Levine, Orin S; Madhi, Shabir A; Murdoch, David R; Scott, J Anthony G; Thea, Donald M; Antonio, Martin; Awori, Juliet O; Baillie, Vicky L; DeLuca, Andrea N; Driscoll, Amanda J; Duncan, Julie; Ebruke, Bernard E; Goswami, Doli; Higdon, Melissa M; Karron, Ruth A; Moore, David P; Morpeth, Susan C; Mulindwa, Justin M; Park, Daniel E; Paveenkittiporn, Wantana; Piralam, Barameht; Prosperi, Christine; Sow, Samba O; Tapia, Milagritos D; Zaman, Khalequ; Zeger, Scott L; O’Brien, Katherine L; Fancourt, Nicholas; Fu, Wei; Wangeci Kagucia, E; Li, Mengying; Wu, Zhenke; Crawley, Jane; Endtz, Hubert P; Hossain, Lokman; Jahan, Yasmin; Ashraf, Hasan; McLellan, Jessica; Machuka, Eunice; Shamsul, Arifin; Zaman, Syed M A; Mackenzie, Grant; Kamau, Alice; Kazungu, Sidi; Ominde, Micah Silaba; Sylla, Mamadou; Tamboura, Boubou; Onwuchekwa, Uma; Kourouma, Nana; Toure, Aliou; Adrian, Peter V; Kuwanda, Locadiah; Mudau, Azwifarwi; Groome, Michelle J; Mahomed, Nasreen; Thamthitiwat, Somsak; Maloney, Susan A; Bunthi, Charatdao; Rhodes, Julia; Sawatwong, Pongpun; Akarasewi, Pasakorn; Mwananyanda, Lawrence; Chipeta, James; Seidenberg, Phil; Mwansa, James; wa Somwe, Somwe; Kwenda, Geoffrey; Anderson, Trevor P; Mitchell, Joanne

    2017-01-01

    Background: Previous studies suggested an association between upper airway pneumococcal colonization density and pneumococcal pneumonia, but data in children are limited. Using data from the Pneumonia Etiology Research for Child Health (PERCH) study, we assessed this potential association. Methods: PERCH is a case-control study in 7 countries: Bangladesh, The Gambia, Kenya, Mali, South Africa, Thailand, and Zambia. Cases were children aged 1–59 months hospitalized with World Health Organization–defined severe or very severe pneumonia. Controls were randomly selected from the community. Microbiologically confirmed pneumococcal pneumonia (MCPP) was confirmed by detection of pneumococcus in a relevant normally sterile body fluid. Colonization density was calculated with quantitative polymerase chain reaction analysis of nasopharyngeal/oropharyngeal specimens. Results: Median colonization density among 56 cases with MCPP (MCPP cases; 17.28 × 10^6 copies/mL) exceeded that of cases without MCPP (non-MCPP cases; 0.75 × 10^6) and controls (0.60 × 10^6) (each P < .001). The optimal density for discriminating MCPP cases from controls using the Youden index was >6.9 log10 copies/mL; overall, the sensitivity was 64% and the specificity 92%, with variable performance by site. The threshold was lower (≥4.4 log10 copies/mL) when MCPP cases were distinguished from controls who received antibiotics before specimen collection. Among the 4035 non-MCPP cases, 500 (12%) had pneumococcal colonization density >6.9 log10 copies/mL; density above this cutoff was associated with alveolar consolidation at chest radiography, very severe pneumonia, oxygen saturation <92%, C-reactive protein ≥40 mg/L, and lack of antibiotic pretreatment (all P < .001). Conclusions: Pneumococcal colonization density >6.9 log10 copies/mL was strongly associated with MCPP and could be used to improve estimates of pneumococcal pneumonia prevalence in childhood pneumonia studies. Our findings do not support its use for individual diagnosis in a clinical setting. PMID:28575365
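
    A minimal sketch of how a discrimination threshold can be chosen with the Youden index from case and control densities; the samples below are synthetic stand-ins, not the PERCH data.

      import numpy as np

      rng = np.random.default_rng(2)
      # Hypothetical log10 colonization densities (copies/mL); not PERCH data.
      cases = rng.normal(7.2, 1.0, 56)      # microbiologically confirmed cases
      controls = rng.normal(5.6, 1.2, 200)  # community controls

      thresholds = np.linspace(3, 10, 701)
      sens = np.array([(cases >= t).mean() for t in thresholds])
      spec = np.array([(controls < t).mean() for t in thresholds])
      youden = sens + spec - 1.0            # maximized at the "optimal" cutoff

      best = youden.argmax()
      print(f"optimal cutoff ~ {thresholds[best]:.1f} log10 copies/mL "
            f"(sens={sens[best]:.2f}, spec={spec[best]:.2f})")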

  19. Pharmacokinetics of differently designed immunoliposome formulations in rats with or without hepatic colon cancer metastases.

    PubMed

    Koning, G A; Morselt, H W; Gorter, A; Allen, T M; Zalipsky, S; Kamps, J A; Scherphof, G L

    2001-09-01

    The aim was to compare the pharmacokinetics of tumor-directed immunoliposomes in healthy and tumor-bearing rats (hepatic colon cancer metastases). A tumor cell-specific monoclonal antibody was attached to polyethyleneglycol-stabilized liposomes, either in a random orientation via a lipid anchor (MPB-PEG-liposomes) or uniformly oriented at the distal end of the PEG chains (Hz-PEG-liposomes). Pharmacokinetics and tissue distribution were determined using [3H]cholesteryloleylether or bilayer-anchored 5-fluoro[3H]deoxyuridine-dipalmitate ([3H]FUdR-dP) as a marker. In healthy animals, clearance of PEG-(immuno)liposomes was almost log-linear and only slightly affected by antibody attachment; in tumor-bearing animals, all liposomes displayed biphasic clearance. In both normal and tumor-bearing animals, blood elimination increased with increasing antibody density, particularly for the Hz-PEG-liposomes, and was accompanied by increased hepatic uptake, probably due to increased numbers of macrophages induced by tumor growth. The presence of antibodies on the liposomes enhanced tumor accumulation: uptake per gram of tumor tissue (2-4% of dose) was similar to that of liver. Remarkably, this applied to both the tumor-specific and an irrelevant antibody. Increased immunoliposome uptake by trypsin-treated Kupffer cells implicated involvement of high-affinity Fc receptors on activated macrophages. Tumor growth and immunoliposome characteristics (antibody density and orientation) thus determine immunoliposome pharmacokinetics. Although a long-circulating immunoliposome formulation that efficiently retained the prodrug FUdR-dP achieved enhanced uptake by hepatic metastases, this uptake was probably mediated not by specific interaction with the tumor cells but by tumor-associated macrophages.

  20. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    PubMed

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The aim of this paper is to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), built on the original total quantum statistical moment (TQSM) model, and to illustrate its application to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, and was then validated and illustrated with the pharmacokinetics of three ingredients in Buyanghuanwu decoction analyzed by three data-analysis methods, and with chromatographic fingerprints of extracts of the Buyanghuanwu-decoction extract prepared with solvents of different solubility parameters. The established model consists of five main parameters: (1) the total quantum statistical moment similarity S_T, the area of overlap between the two normal probability density curves obtained by converting the two sets of TQSM parameters; (2) the total variability D_T, a confidence limit of the standard normal cumulative probability equal to the absolute difference between the two normal cumulative probabilities evaluated at the intersection of the curves; (3) the total variable probability 1-S_s, the standard normal probability within the interval D_T; (4) the total variable probability (1-β)_α; and (5) the stable confidence probability β_(1-α), the probabilities of correctly drawing positive and negative conclusions under a confidence coefficient α. With the model, the TQSMSS similarities of the pharmacokinetics of the three ingredients in Buyanghuanwu decoction obtained by the three data-analysis methods ranged from 0.3852 to 0.9875, indicating different pharmacokinetic behaviors, and the TQSMSS similarities (S_T) of the chromatographic fingerprints of the various solvent extracts ranged from 0.6842 to 0.9992, showing that different solvents extract different constituents. The TQSMSS can characterize sample similarity and, with a test of power, quantify the probability of drawing correct positive and negative conclusions about whether samples come from the same population under a confidence coefficient α, enabling analysis at both macroscopic and microscopic levels; it is therefore a useful similarity-analysis method for medical theoretical research.
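
    The similarity parameter S_T above is described as the overlapping area of two normal probability density curves. A minimal numerical sketch of that idea, with made-up means and standard deviations rather than the decoction data, is:

      import numpy as np
      from scipy import stats
      from scipy.integrate import quad

      def overlap_area(m1, s1, m2, s2):
          """Area shared by two normal PDFs, a similarity measure in [0, 1]."""
          f = lambda x: np.minimum(stats.norm.pdf(x, m1, s1), stats.norm.pdf(x, m2, s2))
          lo = min(m1 - 8 * s1, m2 - 8 * s2)
          hi = max(m1 + 8 * s1, m2 + 8 * s2)
          area, _ = quad(f, lo, hi)
          return area

      # Illustrative statistical-moment parameters for two samples.
      print(overlap_area(1.0, 0.3, 1.2, 0.35))   # similar samples -> close to 1
      print(overlap_area(1.0, 0.3, 3.0, 0.35))   # dissimilar samples -> close to 0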

  1. Inference of strata separation and gas emission paths in longwall overburden using continuous wavelet transform of well logs and geostatistical simulation

    NASA Astrophysics Data System (ADS)

    Karacan, C. Özgen; Olea, Ricardo A.

    2014-06-01

    Prediction of potential methane emission pathways from various sources into active mine workings or sealed gobs from longwall overburden is important for controlling methane and for improving mining safety. The aim of this paper is to infer strata separation intervals and thus gas emission pathways from standard well log data. The proposed technique was applied to well logs acquired through the Mary Lee/Blue Creek coal seam of the Upper Pottsville Formation in the Black Warrior Basin, Alabama, using well logs from a series of boreholes aligned along a nearly linear profile. For this purpose, continuous wavelet transform (CWT) of digitized gamma well logs was performed by using Mexican hat and Morlet, as the mother wavelets, to identify potential discontinuities in the signal. Pointwise Hölder exponents (PHE) of gamma logs were also computed using the generalized quadratic variations (GQV) method to identify the location and strength of singularities of well log signals as a complementary analysis. PHEs and wavelet coefficients were analyzed to find the locations of singularities along the logs. Using the well logs in this study, locations of predicted singularities were used as indicators in single normal equation simulation (SNESIM) to generate equi-probable realizations of potential strata separation intervals. Horizontal and vertical variograms of realizations were then analyzed and compared with those of indicator data and training image (TI) data using the Kruskal-Wallis test. A sum of squared differences was employed to select the most probable realization representing the locations of potential strata separations and methane flow paths. Results indicated that singularities located in well log signals reliably correlated with strata transitions or discontinuities within the strata. Geostatistical simulation of these discontinuities provided information about the location and extents of the continuous channels that may form during mining. If there is a gas source within their zone of influence, paths may develop and allow methane movement towards sealed or active gobs under pressure differentials. Knowledge gained from this research will better prepare mine operations for potential methane inflows, thus improving mine safety.
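
    A minimal sketch of a continuous wavelet transform of a gamma log with Mexican hat and Morlet mother wavelets, using the PyWavelets package (an assumption; the paper does not name its software) and a synthetic log in place of the borehole data:

      import numpy as np
      import pywt

      # Synthetic gamma log: smooth trend plus a sharp discontinuity (stand-in for real data).
      depth = np.arange(0.0, 200.0, 0.5)                    # depth in feet, 0.5 ft sampling
      gamma = 80.0 + 10.0 * np.sin(depth / 15.0)
      gamma[depth > 120] += 35.0                            # abrupt lithologic change

      scales = np.arange(1, 64)
      for wavelet in ('mexh', 'morl'):                      # Mexican hat and Morlet
          coefs, freqs = pywt.cwt(gamma, scales, wavelet, sampling_period=0.5)
          # Large small-scale coefficients flag candidate discontinuities.
          strength = np.abs(coefs[:8]).max(axis=0)
          print(wavelet, "candidate discontinuity near depth",
                depth[strength.argmax()], "ft")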

  2. Channel simulation for direct detection optical communication systems

    NASA Technical Reports Server (NTRS)

    Tycz, M.; Fitzmaurice, M. W.

    1974-01-01

    A technique is described for simulating the random modulation imposed by atmospheric scintillation and transmitter pointing jitter on a direct detection optical communication system. The system is capable of providing signal fading statistics which obey log normal, beta, Rayleigh, Ricean or chi-squared density functions. Experimental tests of the performance of the Channel Simulator are presented.
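
    A hedged sketch of one ingredient of such a simulator: drawing log-normal fading samples with a prescribed scintillation index. The parameter values are illustrative, not those of the NASA hardware.

      import numpy as np

      def lognormal_fading(n, scint_index, mean_intensity=1.0, seed=0):
          """Draw intensity samples I with <I> = mean_intensity and
          scintillation index var(I)/<I>^2 = scint_index."""
          rng = np.random.default_rng(seed)
          sigma2 = np.log(1.0 + scint_index)          # log-domain variance
          mu = np.log(mean_intensity) - sigma2 / 2.0  # keeps the mean fixed
          return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=n)

      samples = lognormal_fading(100000, scint_index=0.2)
      print(samples.mean(), samples.var() / samples.mean() ** 2)  # ~1.0 and ~0.2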

  3. Channel simulation for direct-detection optical communication systems

    NASA Technical Reports Server (NTRS)

    Tycz, M.; Fitzmaurice, M. W.

    1974-01-01

    A technique is described for simulating the random modulation imposed by atmospheric scintillation and transmitter pointing jitter on a direct-detection optical communication system. The system is capable of providing signal fading statistics which obey log-normal, beta, Rayleigh, Ricean, or chi-square density functions. Experimental tests of the performance of the channel simulator are presented.

  4. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  5. Stochastic Modeling Approach to the Incubation Time of Prionic Diseases

    NASA Astrophysics Data System (ADS)

    Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.

    2003-05-01

    Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant with a log-normally distributed stochastic variable. The incubation time distribution is then also shown to be log-normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
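
    A toy numerical sketch (not the authors' mean-field model) of the central point: if the incubation time scales inversely with a log-normally distributed rate constant, the incubation time itself is log-normally distributed.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      # Log-normally distributed rate constants (arbitrary units, placeholder parameters).
      k = rng.lognormal(mean=0.0, sigma=0.5, size=50000)
      # Toy assumption: incubation time inversely proportional to the rate constant.
      t_inc = 1.0 / k

      # log(t_inc) = -log(k) is normal, so t_inc is again log-normal.
      print(stats.shapiro(np.log(t_inc[:500])))   # normality check on a subsample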

  6. An evaluation of procedures to estimate monthly precipitation probabilities

    NASA Astrophysics Data System (ADS)

    Legates, David R.

    1991-01-01

    Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform-normal probability density functions) are comparatively examined to determine their ability to accurately represent variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution describes the 'true' precipitation distribution more adequately than any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
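
    A minimal sketch of the transform-normal idea, assuming strictly positive synthetic monthly totals rather than the station records: Box-Cox-transform the totals, fit a normal distribution to the transformed values, and read off a precipitation probability.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      # Hypothetical 100-year record of July precipitation totals (mm), strictly positive.
      totals = rng.gamma(shape=2.5, scale=30.0, size=100)

      transformed, lam = stats.boxcox(totals)           # estimate the Box-Cox lambda by MLE
      mu, sigma = transformed.mean(), transformed.std(ddof=1)

      # Probability that a July total does not exceed 50 mm under the fitted model.
      x = 50.0
      x_t = (x**lam - 1.0) / lam if lam != 0 else np.log(x)
      print("P(total <= 50 mm) =", stats.norm.cdf(x_t, mu, sigma))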

  7. Convection due to an unstable density difference across a permeable membrane

    NASA Astrophysics Data System (ADS)

    Puthenveettil, Baburaj A.; Arakeri, Jaywant H.

    We study natural convection driven by unstable concentration differences of sodium chloride (NaCl) across a horizontal permeable membrane at Rayleigh numbers (Ra) of 10^10 to 10^11 and Schmidt number (Sc) = 600. A layer of brine lies over a layer of distilled water, separated by the membrane, in square-cross-section tanks. The membrane is permeable enough to allow a small flow across it at higher driving potentials. Based on the predominant mode of transport across the membrane, three regimes of convection, namely an advection regime, a diffusion regime and a combined regime, are identified. The near-membrane flow in all the regimes consists of sheet plumes formed from the unstable layers of fluid near the membrane. In the advection regime, observed at higher concentration differences, the plume spacings (λ_b) show a common log-normal probability density function at all Ra. We propose a phenomenology which predicts the mean plume spacing as λ̄_b ~ sqrt(Z_w Z_{V_i}), where Z_w and Z_{V_i} are, respectively, the near-wall length scales in Rayleigh-Bénard convection (RBC) and due to the advection velocity. In the combined regime, which occurs at intermediate concentration differences, the flux follows an approximately (ΔC/2)^(4/3) scaling. At lower driving potentials, in the diffusion regime, the flux scaling is similar to that in turbulent RBC.

  8. Generalized t-statistic for two-group classification.

    PubMed

    Komori, Osamu; Eguchi, Shinto; Copas, John B

    2015-06-01

    In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.

  9. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  10. An adaptive density-based routing protocol for flying Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Zheng, Xueli; Qi, Qian; Wang, Qingwen; Li, Yongqiang

    2017-10-01

    An Adaptive Density-based Routing Protocol (ADRP) for Flying Ad Hoc Networks (FANETs) is proposed in this paper. The main objective is to calculate the forwarding probability adaptively in order to increase the efficiency of forwarding in FANETs. ADRP dynamically fine-tunes the rebroadcasting probability of a node for routing request packets according to the number of neighbour nodes; it is advantageous to favour retransmission by nodes with few neighbours. We describe the protocol, implement it, and evaluate its performance using the NS-2 network simulator. Simulation results reveal that ADRP outperforms AODV in terms of packet delivery fraction, average end-to-end delay, normalized routing load, normalized MAC load, and throughput.

  11. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  12. Density and velocity relationships for digital sonic and density logs from coastal Washington and laboratory measurements of Olympic Peninsula mafic rocks and greywackes

    USGS Publications Warehouse

    Brocher, Thomas M.; Christensen, Nikolas I.

    2001-01-01

    Three-dimensional velocity models for the basins along the coast of Washington and in Puget Lowland provide a means for better understanding the lateral variations in strong ground motions recorded there. We have compiled 16 sonic and 18 density logs from 22 oil test wells to help us determine the geometry and physical properties of the Cenozoic basins along coastal Washington. The depth ranges sampled by the test-well logs fall between 0.3 and 2.1 km. These well logs sample Quaternary to middle Eocene sedimentary rocks of the Quinault Formation, Montesano Formation, and Hoh rock assemblage. Most (18 or 82%) of the wells are from Grays Harbor County, and many of these are from the Ocean City area. These Grays Harbor County wells sample the Quinault Formation, Montesano Formation, and frequently bottom in the Hoh rock assemblage. These wells show that the sonic velocity and density normally increase significantly across the contacts between the Quinault or the Montesano Formations and the Hoh rock assemblage. Reflection coefficients calculated for vertically traveling compressional waves from the average velocities and densities for these units suggest that the top of the Hoh rock assemblage is a strong reflector of downward-propagating seismic waves: these reflection coefficients lie between 11 and 20%. Thus, this boundary may reflect seismic energy upward and trap a substantial portion of the seismic energy generated by future earthquakes within the Miocene and younger sedimentary basins found along the Washington coast. Three wells from Jefferson County provide data for the Hoh rock assemblage for the entire length of the logs. One well (Eastern Petroleum Sniffer Forks #1), from the Forks area in Clallam County, also exclusively samples the Hoh rock assemblage. This report presents the locations, elevations, depths, stratigraphic, and other information for all the oil test wells, and provides plots showing the density and sonic velocities as a function of depth for each well log. We also present two-way traveltimes for 15 of the wells calculated from the sonic velocities. Average velocities and densities for the wells having both logs can be reasonably well related using a modified Gardner’s rule, with ρ = 1825v^(1/4), where ρ is the density (in kg/m^3) and v is the sonic velocity (in km/s). In contrast, a similar analysis of published well logs from Puget Lowland is best matched by a Gardner’s rule of ρ = 1730v^(1/4), close to the ρ = 1740v^(1/4) proposed by Gardner et al. (1974). Finally, we present laboratory measurements of compressional-wave velocity, shear-wave velocity, and density for 11 greywackes and 29 mafic rocks from the Olympic Peninsula and Puget Lowland. These units have significance for earthquake-hazard investigations in Puget Lowland as they dip eastward beneath the Lowland, forming the “bedrock” beneath much of the lowland. Average Vp/Vs ratios for the mafic rocks, mainly Crescent Formation volcanics, lie between 1.81 and 1.86. Average Vp/Vs ratios for the greywackes from the accretionary core complex in the Olympic Peninsula show greater scatter but lie between 1.77 and 1.88. Both the Olympic Peninsula mafic rocks and greywackes have lower shear-wave velocities than would be expected for a Poisson solid (Vp/Vs = 1.732). Although the P-wave velocities and densities in the greywackes can be related by a Gardner’s rule of ρ = 1720v^(1/4), close to the ρ = 1740v^(1/4) proposed by Gardner et al. (1974), the velocities and densities of the mafic rocks are best related by a Gardner’s rule of ρ = 1840v^(1/4). Thus, the density/velocity relations are similar for the Puget Lowland well logs and greywackes from the Olympic Peninsula. Density/velocity relations are similar for the Washington coastal well logs and mafic rocks from the Olympic Peninsula, but differ from those of the Puget Lowland well logs and greywackes from the Olympic Peninsula.
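
    The modified Gardner's rules quoted above are simple power laws and can be evaluated directly; the sketch below uses the coefficients reported in the abstract with an arbitrary example velocity.

      def gardner_density(v_km_s, coefficient):
          """Density in kg/m^3 from sonic velocity in km/s, rho = a * v**0.25."""
          return coefficient * v_km_s ** 0.25

      v = 3.0  # km/s, arbitrary example velocity
      for label, a in [("coastal WA wells / Olympic mafic rocks", 1825),
                       ("Puget Lowland wells", 1730),
                       ("Gardner et al. (1974)", 1740)]:
          print(f"{label}: rho ~ {gardner_density(v, a):.0f} kg/m^3")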

  13. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, assigning high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  14. Spectral Density of Laser Beam Scintillation in Wind Turbulence. Part 1; Theory

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1997-01-01

    The temporal spectral density of the log-amplitude scintillation of a laser beam wave due to a spatially dependent vector-valued crosswind (deterministic as well as random) is evaluated. The path weighting functions for normalized spectral moments are derived, and offer a potential new technique for estimating the wind velocity profile. The Tatarskii-Klyatskin stochastic propagation equation for the Markov turbulence model is used with the solution approximated by the Rytov method. The Taylor 'frozen-in' hypothesis is assumed for the dependence of the refractive index on the wind velocity, and the Kolmogorov spectral density is used for the refractive index field.

  15. Statistical analysis of variability properties of the Kepler blazar W2R 1926+42

    NASA Astrophysics Data System (ADS)

    Li, Yutong; Hu, Shaoming; Wiita, Paul J.; Gupta, Alok C.

    2018-04-01

    We analyzed Kepler light curves of the blazar W2R 1926+42 that provided nearly continuous coverage from quarter 11 through quarter 17 (589 days between 2011 and 2013) and examined some of their flux variability properties. We investigate the possibility that the light curve is dominated by a large number of individual flares and adopt exponential rise and decay models to investigate the symmetry properties of the flares. We found that the variations of W2R 1926+42 are predominantly asymmetric, with a weak tendency toward positive asymmetry (rapid rise and slow decay). The durations (D) and amplitudes (F0) of the flares can be fit with log-normal distributions. The energy (E) of each flare is also estimated for the first time. There are positive correlations between log D and log E, with a slope of 1.36, and between log F0 and log E, with a slope of 1.12. Lomb-Scargle periodograms are used to estimate the power spectral density (PSD) shape. It is well described by a power law with an index ranging between -1.1 and -1.5. The sizes of the emission regions, R, are estimated to be in the range 1.1 × 10^15 cm to 6.6 × 10^16 cm. The flare asymmetry is difficult to explain by a light travel time effect but may be caused by differences between the timescales for acceleration and dissipation of high-energy particles in the relativistic jet. A jet-in-jet model could also produce the observed log-normal distributions.
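
    A minimal sketch of estimating a PSD power-law index from an unevenly sampled light curve with a Lomb-Scargle periodogram, using scipy (an assumption; the authors do not state their implementation) and synthetic red-noise-like data in place of the Kepler photometry:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(5)
      # Unevenly sampled synthetic light curve (a red-noise-like random walk).
      t = np.sort(rng.uniform(0.0, 589.0, 2000))          # days
      flux = np.cumsum(rng.normal(size=t.size))
      flux -= flux.mean()

      freqs = np.linspace(0.01, 2.0, 400)                 # cycles/day
      omega = 2 * np.pi * freqs                           # lombscargle expects angular frequency
      power = lombscargle(t, flux, omega, normalize=True)

      # Power-law PSD index from a straight-line fit in log-log space.
      slope, intercept = np.polyfit(np.log10(freqs), np.log10(power), 1)
      print(f"estimated PSD power-law index ~ {slope:.2f}")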

  16. On the asymptotic improvement of supervised learning by utilizing additional unlabeled samples - Normal mixture density case

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The effect of additional unlabeled samples in improving the supervised learning process is studied in this paper. Three learning processes, supervised, unsupervised, and combined supervised-unsupervised, are compared by studying the asymptotic behavior of the estimates obtained under each process. Upper and lower bounds on the asymptotic covariance matrices are derived. It is shown that under a normal mixture density assumption for the probability density function of the feature space, the combined supervised-unsupervised learning is always superior to the supervised learning in achieving better estimates. Experimental results are provided to verify the theoretical concepts.

  17. Estimation of Microbial Contamination of Food from Prevalence and Concentration Data: Application to Listeria monocytogenes in Fresh Vegetables▿

    PubMed Central

    Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric

    2007-01-01

    A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacteria Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by considering one contaminated sample in prevalence studies in which samples are in fact negative. This deliberate overestimation is necessary to complete calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated with concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999 and a lower estimation of contamination of leafy salads than that of sprouts and other vegetables. The interest of the mixture model for the estimation of microbial contamination is discussed. PMID:17098926

  18. Usefulness of the novel risk estimation software, Heart Risk View, for the prediction of cardiac events in patients with normal myocardial perfusion SPECT.

    PubMed

    Sakatani, Tomohiko; Shimoo, Satoshi; Takamatsu, Kazuaki; Kyodo, Atsushi; Tsuji, Yumika; Mera, Kayoko; Koide, Masahiro; Isodono, Koji; Tsubakimoto, Yoshinori; Matsuo, Akiko; Inoue, Keiji; Fujita, Hiroshi

    2016-12-01

    Myocardial perfusion single-photon emission computed tomography (SPECT) can predict cardiac events in patients with coronary artery disease with high accuracy; however, false-negative cases sometimes occur. Heart Risk View, which is based on the prospective cohort study J-ACCESS, is software for estimating cardiac event probability. We examined whether Heart Risk View is useful for evaluating cardiac risk in patients with normal myocardial perfusion SPECT (MPS). We studied 3461 consecutive patients who underwent MPS to detect myocardial ischemia; those with normal MPS were enrolled in this study (n = 698). We calculated the cardiac event probability with Heart Risk View and followed the patients up for 3.8 ± 2.4 years. Cardiac events were defined as cardiac death, non-fatal myocardial infarction, and heart failure requiring hospitalization. During the follow-up period, 21 patients (3.0%) had cardiac events. The event probability calculated by Heart Risk View was higher in the event group (5.5 ± 2.6 vs. 2.9 ± 2.6%, p < 0.001). According to the receiver-operating characteristic curve, the cut-off point of the event probability for predicting cardiac events was 3.4% (sensitivity 0.76, specificity 0.72, and AUC 0.85). Kaplan-Meier analysis with the log-rank test revealed a higher event rate in the high-event-probability group (p < 0.001). Although myocardial perfusion SPECT is useful for the prediction of cardiac events, risk estimation by Heart Risk View adds prognostic information, especially in patients with normal MPS.

  19. Density structure of submarine slump and normal sediments of the first gas production test site at Daini-Atsumi Knoll near Nankai Trough, estimated by LWD logging data

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Takayama, T.; Fujii, T.; Yamamoto, K.

    2014-12-01

    Many geologists have discussed slope instability caused by gas-hydrate dissociation, which can create mobile fluid in the pore space of sediments. However, the physical property changes caused by gas-hydrate dissociation are not so simple. Moreover, natural gas production from a gas-hydrate reservoir by the depressurization method is a completely different phenomenon from dissociation processes in nature, because it does not generate excess pore pressure even though gas and water are present. Hence, in all cases, the physical properties of gas-hydrate-bearing sediments and of their cover sediments are essential for understanding these phenomena and for simulating them during gas-hydrate dissociation. Daini-Atsumi Knoll, the first offshore gas-production test site from gas hydrate, is partially covered by slumps. Fortunately, one of them was penetrated by both a Logging-While-Drilling (LWD) hole and a pressure-coring hole. From the LWD data analyses and core analyses, we determined the density structure of the sediments from the seafloor to the Bottom Simulating Reflector (BSR). The results are as follows. (1) The semi-confined slump shows relatively high density, which can be explained by over-consolidation resulting from layer-parallel compression caused by slumping. (2) The bottom sequence of the slump has relatively high-density zones, which can be explained by shear-induced compaction along the slide plane. (3) The density below the slump tends to increase with depth, consistent with normal consolidation of the sediments below the slump deposit. (4) Several kinds of log data for estimating the physical properties of gas-hydrate reservoir sediments have been obtained, which will be useful for constructing a geological model from the seafloor to the BSR. These results can be used to build geological models not only for slope instability at the time of slumping, but also for slope stability during the depressurization period of gas production from gas hydrate. Acknowledgement: This study was supported by funding from the Research Consortium for Methane Hydrate Resources in Japan (MH21 Research Consortium) planned by the Ministry of Economy, Trade and Industry (METI).

  20. Load-Based Lower Neck Injury Criteria for Females from Rear Impact from Cadaver Experiments.

    PubMed

    Yoganandan, Narayan; Pintar, Frank A; Banerjee, Anjishnu

    2017-05-01

    The objectives of this study were to derive lower neck injury metrics/criteria and injury risk curves for the force, moment, and interaction criterion in rear impacts for females. Biomechanical data were obtained from previous intact and isolated post mortem human subjects and head-neck complexes subjected to posteroanterior accelerative loading. Censored data were used in the survival analysis model. The primary shear force, sagittal bending moment, and interaction (lower neck injury criterion, LNic) metrics were significant predictors of injury. The best-fitting distribution (Weibull, log-normal, or log-logistic) was selected using the Akaike information criterion according to the latest ISO recommendations for deriving risk curves. The Kolmogorov-Smirnov test was used to quantify the robustness of the assumed parametric model. The intercepts for the interaction index were extracted from the primary risk curves. Normalized confidence interval sizes (NCIS) were reported at discrete probability levels, along with the risk curves and 95% confidence intervals. A mean force of 214 N, a moment of 54 Nm, and an LNic of 0.89 were associated with a five percent probability of injury. The NCIS for these metrics were 0.90, 0.95, and 0.85. These preliminary results can be used as a first step in the definition of lower neck injury criteria for women under posteroanterior accelerative loading in crashworthiness evaluations.
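
    Survival analysis with censored data, as used in the study, needs a dedicated model; the hedged sketch below ignores censoring and simply compares Weibull, log-normal, and log-logistic fits to an uncensored synthetic sample by AIC, to illustrate how a best-fitting distribution can be selected. All numbers are placeholders.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      # Hypothetical peak lower-neck shear forces at injury (N); uncensored toy sample.
      force = rng.lognormal(mean=np.log(300.0), sigma=0.35, size=40)

      candidates = {
          'weibull_min': stats.weibull_min,
          'lognorm': stats.lognorm,
          'fisk': stats.fisk,          # log-logistic
      }
      for name, dist in candidates.items():
          params = dist.fit(force, floc=0)                 # MLE with location fixed at 0
          loglik = np.sum(dist.logpdf(force, *params))
          aic = 2 * (len(params) - 1) - 2 * loglik         # loc was fixed, not estimated
          print(f"{name:12s}  AIC = {aic:.1f}")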

  1. Joint Stochastic Inversion of Pre-Stack 3D Seismic Data and Well Logs for High Resolution Hydrocarbon Reservoir Characterization

    NASA Astrophysics Data System (ADS)

    Torres-Verdin, C.

    2007-05-01

    This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density) and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of inverted products is higher than that of deterministic inversion methods.

  2. Analyzing the propagation behavior of scintillation index and bit error rate of a partially coherent flat-topped laser beam in oceanic turbulence.

    PubMed

    Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh

    2015-11-01

    In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression describing the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam in weak-to-moderate oceanic turbulence is derived; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and hence on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
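
    A minimal numerical sketch of the final step described above: averaging a conditional bit-error rate over a log-normal irradiance PDF with a given scintillation index. The OOK-style conditional BER and the SNR value are assumptions for illustration, not the paper's full PCFT-beam model.

      import numpy as np
      from scipy import special, integrate

      def mean_ber(snr, scint_index):
          """Average BER over a log-normal irradiance PDF with unit mean irradiance."""
          sigma2 = np.log(1.0 + scint_index)           # log-irradiance variance
          mu = -sigma2 / 2.0                           # keeps the mean irradiance at 1

          def integrand(i):
              pdf = np.exp(-(np.log(i) - mu) ** 2 / (2 * sigma2)) / (i * np.sqrt(2 * np.pi * sigma2))
              ber_given_i = 0.5 * special.erfc(snr * i / (2 * np.sqrt(2)))  # assumed OOK form
              return pdf * ber_given_i

          value, _ = integrate.quad(integrand, 1e-12, np.inf)  # start just above 0 to avoid log(0)
          return value

      for si in (0.05, 0.2, 0.5):
          print(f"scintillation index {si}: <BER> ~ {mean_ber(snr=6.0, scint_index=si):.2e}")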

  3. Density of Upper Respiratory Colonization With Streptococcus pneumoniae and Its Role in the Diagnosis of Pneumococcal Pneumonia Among Children Aged <5 Years in the PERCH Study.

    PubMed

    Baggett, Henry C; Watson, Nora L; Deloria Knoll, Maria; Brooks, W Abdullah; Feikin, Daniel R; Hammitt, Laura L; Howie, Stephen R C; Kotloff, Karen L; Levine, Orin S; Madhi, Shabir A; Murdoch, David R; Scott, J Anthony G; Thea, Donald M; Antonio, Martin; Awori, Juliet O; Baillie, Vicky L; DeLuca, Andrea N; Driscoll, Amanda J; Duncan, Julie; Ebruke, Bernard E; Goswami, Doli; Higdon, Melissa M; Karron, Ruth A; Moore, David P; Morpeth, Susan C; Mulindwa, Justin M; Park, Daniel E; Paveenkittiporn, Wantana; Piralam, Barameht; Prosperi, Christine; Sow, Samba O; Tapia, Milagritos D; Zaman, Khalequ; Zeger, Scott L; O'Brien, Katherine L

    2017-06-15

    Previous studies suggested an association between upper airway pneumococcal colonization density and pneumococcal pneumonia, but data in children are limited. Using data from the Pneumonia Etiology Research for Child Health (PERCH) study, we assessed this potential association. PERCH is a case-control study in 7 countries: Bangladesh, The Gambia, Kenya, Mali, South Africa, Thailand, and Zambia. Cases were children aged 1-59 months hospitalized with World Health Organization-defined severe or very severe pneumonia. Controls were randomly selected from the community. Microbiologically confirmed pneumococcal pneumonia (MCPP) was confirmed by detection of pneumococcus in a relevant normally sterile body fluid. Colonization density was calculated with quantitative polymerase chain reaction analysis of nasopharyngeal/oropharyngeal specimens. Median colonization density among 56 cases with MCPP (MCPP cases; 17.28 × 10^6 copies/mL) exceeded that of cases without MCPP (non-MCPP cases; 0.75 × 10^6) and controls (0.60 × 10^6) (each P < .001). The optimal density for discriminating MCPP cases from controls using the Youden index was >6.9 log10 copies/mL; overall, the sensitivity was 64% and the specificity 92%, with variable performance by site. The threshold was lower (≥4.4 log10 copies/mL) when MCPP cases were distinguished from controls who received antibiotics before specimen collection. Among the 4035 non-MCPP cases, 500 (12%) had pneumococcal colonization density >6.9 log10 copies/mL; density above this cutoff was associated with alveolar consolidation at chest radiography, very severe pneumonia, oxygen saturation <92%, C-reactive protein ≥40 mg/L, and lack of antibiotic pretreatment (all P < .001). Pneumococcal colonization density >6.9 log10 copies/mL was strongly associated with MCPP and could be used to improve estimates of pneumococcal pneumonia prevalence in childhood pneumonia studies. Our findings do not support its use for individual diagnosis in a clinical setting. Published by Oxford University Press for the Infectious Diseases Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  4. Accuracy and borehole influences in pulsed neutron gamma density logging while drilling.

    PubMed

    Yu, Huawei; Sun, Jianmeng; Wang, Jiaxin; Gardner, Robin P

    2011-09-01

    A new pulsed neutron gamma density (NGD) logging method has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes Monte Carlo studies of the accuracy of the near and far density measurements of NGD logging at two spacings, and of the associated borehole influences. The results show that the accuracy of the near density is not as good as that of the far density. It is difficult to correct for borehole effects by conventional methods because both the near and far density measurements are significantly sensitive to standoffs and mud properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Measurement of the distribution of ventilation-perfusion ratios in the human lung with proton MRI: comparison with the multiple inert-gas elimination technique.

    PubMed

    Sá, Rui Carlos; Henderson, A Cortney; Simonson, Tatum; Arai, Tatsuya J; Wagner, Harrieth; Theilmann, Rebecca J; Wagner, Peter D; Prisk, G Kim; Hopkins, Susan R

    2017-07-01

    We have developed a novel functional proton magnetic resonance imaging (MRI) technique to measure regional ventilation-perfusion (V̇A/Q̇) ratio in the lung. We conducted a comparison study of this technique in healthy subjects (n = 7, age = 42 ± 16 yr, forced expiratory volume in 1 s = 94% predicted), by comparing data measured using MRI to that obtained from the multiple inert gas elimination technique (MIGET). Regional ventilation measured in a sagittal lung slice using Specific Ventilation Imaging was combined with proton density measured using a fast gradient-echo sequence to calculate regional alveolar ventilation, registered with perfusion images acquired using arterial spin labeling, and divided on a voxel-by-voxel basis to obtain regional V̇A/Q̇ ratio. LogSDV̇ and LogSDQ̇, measures of heterogeneity derived from the standard deviation (log scale) of the ventilation and perfusion vs. V̇A/Q̇ ratio histograms, respectively, were calculated. On a separate day, subjects underwent study with MIGET, and LogSDV̇ and LogSDQ̇ were calculated from MIGET data using the 50-compartment model. MIGET LogSDV̇ and LogSDQ̇ were normal in all subjects. LogSDQ̇ was highly correlated between MRI and MIGET (R = 0.89, P = 0.007); the intercept was not significantly different from zero (-0.062, P = 0.65) and the slope did not significantly differ from identity (1.29, P = 0.34). MIGET and MRI measures of LogSDV̇ were well correlated (R = 0.83, P = 0.02); the intercept differed from zero (0.20, P = 0.04) and the slope deviated from the line of identity (0.52, P = 0.01). We conclude that in normal subjects, there is a reasonable agreement between MIGET measures of heterogeneity and those from proton MRI measured in a single slice of lung. NEW & NOTEWORTHY We report a comparison of a new proton MRI technique to measure regional V̇A/Q̇ ratio against the multiple inert gas elimination technique (MIGET). The study reports good relationships between measures of heterogeneity derived from MIGET and those derived from MRI. Although currently limited to a single slice acquisition, these data suggest that single sagittal slice measures of V̇A/Q̇ ratio provide an adequate means to assess heterogeneity in the normal lung. Copyright © 2017 the American Physiological Society.

  6. Evaluation of waste mushroom logs as a potential biomass resource for the production of bioethanol.

    PubMed

    Lee, Jae-Won; Koo, Bon-Wook; Choi, Joon-Weon; Choi, Don-Ha; Choi, In-Gyu

    2008-05-01

    In order to investigate the possibility of using waste mushroom logs as a biomass resource for alternative energy production, the chemical and physical characteristics of normal wood and waste mushroom logs were examined. Size reduction of normal wood (145 kWh/tonne) required significantly higher energy consumption than that of waste mushroom logs (70 kWh/tonne). The crystallinity of waste mushroom logs was dramatically lower (33%) than that of normal wood (49%) after cultivation of Lentinus edodes as spawn. Lignin, an inhibitor of enzymatic hydrolysis in sugar production, decreased from 21.07% to 18.78% after inoculation with L. edodes. Total sugar yields obtained by enzymatic and acid hydrolysis were higher in waste mushroom logs than in normal wood. After 24 h of fermentation, 12 g/L of ethanol was produced from waste mushroom logs, while normal wood produced 8 g/L. These results indicate that waste mushroom logs are an economically suitable lignocellulosic material for the production of fermentable sugars for bioethanol production.

  7. Jimsphere wind and turbulence exceedance statistic

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.; Court, A.

    1972-01-01

    Exceedance statistics of winds and gusts observed over Cape Kennedy with Jimsphere balloon sensors are described. Gust profiles containing positive and negative departures from smoothed profiles, in the wavelength ranges 100-2500, 100-1900, 100-860, and 100-460 meters, were computed from 1578 profiles with four 41-weight digital high-pass filters. Extreme values of the square root of gust speed are normally distributed. Monthly and annual exceedance probability distributions of normalized rms gust speeds in three altitude bands (2-7, 6-11, and 9-14 km) are log-normal. The rms gust speeds are largest in the 100-2500 m wavelength band between 9 and 14 km in late winter and early spring. A study of monthly and annual exceedance probabilities and the number of occurrences per kilometer of level crossings with positive slope indicates significant variability with season, altitude, and filter configuration. A decile sampling scheme is tested and an optimum approach is suggested for drawing a relatively small random sample that represents the characteristic extreme wind speeds and shears of a large parent population of Jimsphere wind profiles.
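
    A small sketch of reading exceedance probabilities from a log-normal model of normalized rms gust speeds, as assumed above; the distribution parameters are placeholders, not the Cape Kennedy estimates.

      import numpy as np
      from scipy import stats

      # Hypothetical log-normal model for normalized rms gust speed in one altitude band.
      median, sigma = 1.0, 0.5                  # placeholders, not the Jimsphere estimates
      model = stats.lognorm(s=sigma, scale=median)

      for level in (1.5, 2.0, 3.0):
          print(f"P(rms gust > {level:.1f}) = {model.sf(level):.3f}")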

  8. The global star formation law of galaxies revisited in the radio continuum

    NASA Astrophysics Data System (ADS)

    Liu, LiJie; Gao, Yu

    2012-02-01

    We study the global star formation law, the relation between the gas and star formation rate (SFR) in a sample of 130 local galaxies with infrared (IR) luminosities spanning over three orders of magnitude (10^9-10^12 L⊙), which includes 91 normal spiral galaxies and 39 (ultra)luminous IR galaxies [(U)LIRGs]. We derive their total (atomic and molecular) gas and dense molecular gas masses using newly available HI, CO and HCN data from the literature. The SFR of galaxies is determined from total IR (8-1000 μm) and 1.4 GHz radio continuum (RC) luminosities. The galaxy disk sizes are defined by the de-convolved elliptical Gaussian FWHM of the RC maps. We derive the galaxy disk-averaged SFRs and various gas surface densities, and investigate their relationships. We find that the galaxy disk-averaged surface density of dense molecular gas mass has the tightest correlation with that of SFR (scatter ˜0.26 dex), and is linear in log-log space (power-law slope of N=1.03±0.02) across the full galaxy sample. The correlation between the total gas and SFR surface densities for the full sample has a somewhat larger scatter (˜0.48 dex), and is best fit by a power-law with slope 1.45±0.02. However, the slope changes from ˜1 when only normal spirals are considered, to ˜1.5 when more and more (U)LIRGs are included in the fitting. When different CO-to-H2 conversion factors are used to infer molecular gas masses for normal galaxies and (U)LIRGs, the bi-modal relations claimed recently in CO observations of high-redshift galaxies appear to also exist in local populations of star-forming galaxies.

  9. Computer analysis of digital well logs

    USGS Publications Warehouse

    Scott, James H.

    1984-01-01

    A comprehensive system of computer programs has been developed by the U.S. Geological Survey for analyzing digital well logs. The programs are operational on a minicomputer in a research well-logging truck, making it possible to analyze and replot the logs while at the field site. The minicomputer also serves as a controller of digitizers, counters, and recorders during acquisition of well logs. The analytical programs are coordinated with the data acquisition programs in a flexible system that allows the operator to make changes quickly and easily in program variables such as calibration coefficients, measurement units, and plotting scales. The programs are designed to analyze the following well-logging measurements: natural gamma-ray, neutron-neutron, dual-detector density with caliper, magnetic susceptibility, single-point resistance, self potential, resistivity (normal and Wenner configurations), induced polarization, temperature, sonic delta-t, and sonic amplitude. The computer programs are designed to make basic corrections for depth displacements, tool response characteristics, hole diameter, and borehole fluid effects (when applicable). Corrected well-log measurements are output to magnetic tape or plotter with measurement units transformed to petrophysical and chemical units of interest, such as grade of uranium mineralization in percent eU3O8, neutron porosity index in percent, and sonic velocity in kilometers per second.

  10. Universal noise and Efimov physics

    NASA Astrophysics Data System (ADS)

    Nicholson, Amy N.

    2016-03-01

    Probability distributions for correlation functions of particles interacting via random-valued fields are discussed as a novel tool for determining the spectrum of a theory. In particular, this method is used to determine the energies of universal N-body clusters tied to Efimov trimers, for even N, by investigating the distribution of a correlation function of two particles at unitarity. Using numerical evidence that this distribution is log-normal, an analytical prediction for the N-dependence of the N-body binding energies is made.

  11. Proton Straggling in Thick Silicon Detectors

    NASA Technical Reports Server (NTRS)

    Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.

    2017-01-01

    Straggling functions for protons in thick silicon radiation detectors are computed by Monte Carlo simulation. Mean energy loss is constrained by the silicon stopping power, providing higher straggling at low energy and probabilities for stopping within the detector volume. By matching the first four moments of simulated energy-loss distributions, straggling functions are approximated by a log-normal distribution that is accurate for Vavilov κ ≥ 0.3. They are verified by comparison to experimental proton data from a charged particle telescope.
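
    A reduced version of the moment-matching idea can be sketched by matching only the first two moments of an energy-loss sample to a log-normal (the paper matches four); the sample below is a generic skewed stand-in, not a Vavilov or Monte Carlo straggling simulation.

    ```python
    import numpy as np
    from scipy.stats import lognorm

    rng = np.random.default_rng(1)

    # Stand-in for simulated energy losses in a thick detector (MeV); any skewed positive sample works
    dE = rng.gamma(8.0, 0.25, size=100_000)

    # Match the first two moments of the sample to a log-normal distribution
    m, v = dE.mean(), dE.var()
    sigma2 = np.log(1.0 + v / m**2)
    mu = np.log(m) - 0.5 * sigma2
    fit = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

    print(f"log-normal parameters: mu = {mu:.3f}, sigma = {np.sqrt(sigma2):.3f}")
    for q in (0.1, 0.5, 0.9):   # compare a few quantiles of the sample and the fit
        print(f"q = {q}: sample {np.quantile(dE, q):.2f}, fit {fit.ppf(q):.2f}")
    ```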

  12. Increasing market efficiency in the stock markets

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Suk; Kwak, Wooseop; Kaizoji, Taisei; Kim, In-Mook

    2008-01-01

    We study the temporal evolutions of three stock markets: the Standard and Poor's 500 index, the Nikkei 225 Stock Average, and the Korea Composite Stock Price Index. We observe that the probability density function of the log-return has a fat tail, but that the tail index has been increasing continuously in recent years. We have also found that the variance of the autocorrelation function, the scaling exponent of the standard deviation, and the statistical complexity decrease, while the entropy density increases, over time. We introduce a modified microscopic spin model and simulate the model to confirm such increasing and decreasing tendencies in statistical quantities. These findings indicate that these three stock markets are becoming more efficient.

  13. Field test results--a new logging tool for formation density and lithology measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borai, A.M.; Muhsin, M.A.

    1983-03-01

    The formation porosity can be determined from borehole density measurements if the density of the rock is known. Generally, this is determined from the lithology. The Litho-Density Tool, LDT, provides an improved measurement of the formation density and a new measurement of lithology. Field tests of LDT proved that the tool could be run alone in a wide range of formations to provide porosity values comparable to those obtained by running a density log combined with a neutron log.

  14. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times higher than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the log-normal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
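
    The key mechanism of the SEH, that a product of independent positive random factors becomes log-normal because the sum of their logarithms obeys the CLT, is easy to verify numerically. A minimal sketch with ten uniform factors, each spread 10% around an arbitrary mean (the means below are placeholders, not Dole's values):

    ```python
    import numpy as np
    from scipy.stats import kstest

    rng = np.random.default_rng(2)

    # Ten positive factors, each uniform within +/-10% of an arbitrary mean (placeholder means)
    means = np.array([10.0, 0.5, 2.0, 1.5, 0.3, 4.0, 0.8, 1.2, 0.1, 5.0])
    factors = rng.uniform(0.9 * means, 1.1 * means, size=(200_000, means.size))
    n_hab = factors.prod(axis=1)   # analogue of the number of habitable planets

    # By the CLT, ln(n_hab) = sum of ln(factor_i) is close to Gaussian, so n_hab is close to log-normal
    log_n = np.log(n_hab)
    z = (log_n - log_n.mean()) / log_n.std()
    print("mean and std of ln(product):", round(log_n.mean(), 3), round(log_n.std(), 3))
    print("KS statistic against a standard normal:", round(kstest(z, "norm").statistic, 4))
    ```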

  15. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  16. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  17. MODELING THE ANOMALY OF SURFACE NUMBER DENSITIES OF GALAXIES ON THE GALACTIC EXTINCTION MAP DUE TO THEIR FIR EMISSION CONTAMINATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashiwagi, Toshiya; Suto, Yasushi; Taruya, Atsushi

    The most widely used Galactic extinction map is constructed assuming that the observed far-infrared (FIR) fluxes come entirely from Galactic dust. According to the earlier suggestion by Yahata et al., we consider how FIR emission of galaxies affects the SFD map. We first compute the surface number density of Sloan Digital Sky Survey (SDSS) DR7 galaxies as a function of the r-band extinction, A_r,SFD. We confirm that the surface densities of those galaxies positively correlate with A_r,SFD for A_r,SFD < 0.1, as first discovered by Yahata et al. for SDSS DR4 galaxies. Next we construct an analytical model to compute the surface density of galaxies, taking into account the contamination of their FIR emission. We adopt a log-normal probability distribution for the ratio of 100 μm and r-band luminosities of each galaxy, y ≡ (νL)_100μm/(νL)_r. Then we search for the mean and rms values of y that fit the observed anomaly, using the analytical model. The required values to reproduce the anomaly are roughly consistent with those measured from the stacking analysis of SDSS galaxies. Due to the limitation of our statistical modeling, we are not yet able to remove the FIR contamination of galaxies from the extinction map. Nevertheless, the agreement with the model prediction suggests that the FIR emission of galaxies is mainly responsible for the observed anomaly. Whereas the corresponding systematic error in the Galactic extinction map is 0.1-1 mmag, it is directly correlated with galaxy clustering and thus needs to be carefully examined in precision cosmology.

  18. HIGH STAR FORMATION RATES IN TURBULENT ATOMIC-DOMINATED GAS IN THE INTERACTING GALAXIES IC 2163 AND NGC 2207

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmegreen, Bruce G.; Kaufman, Michele; Bournaud, Frédéric

    CO observations of the interacting galaxies IC 2163 and NGC 2207 are combined with HI, Hα, and 24 μm observations to study the star formation rate (SFR) surface density as a function of the gas surface density. More than half of the high-SFR regions are HI dominated. When compared to other galaxies, these HI-dominated regions have excess SFRs relative to their molecular gas surface densities but normal SFRs relative to their total gas surface densities. The HI-dominated regions are mostly located in the outer part of NGC 2207 where the HI velocity dispersion is high, 40-50 km s^-1. We suggest that the star-forming clouds in these regions have envelopes at lower densities than normal, making them predominantly atomic, and cores at higher densities than normal because of the high turbulent Mach numbers. This is consistent with theoretical predictions of a flattening in the density probability distribution function for compressive, high Mach number turbulence.

  19. Inference with minimal Gibbs free energy in information field theory.

    PubMed

    Ensslin, Torsten A; Weig, Cornelius

    2010-11-01

    Non-linear and non-gaussian signal inference problems are difficult to tackle. Renormalization techniques permit us to construct good estimators for the posterior signal mean within information field theory (IFT), but the approximations and assumptions made are not very obvious. Here we introduce the simple concept of minimal Gibbs free energy to IFT, and show that previous renormalization results emerge naturally. They can be understood as being the gaussian approximation to the full posterior probability, which has maximal cross information with it. We derive optimized estimators for three applications, to illustrate the usage of the framework: (i) reconstruction of a log-normal signal from poissonian data with background counts and point spread function, as it is needed for gamma ray astronomy and for cosmography using photometric galaxy redshifts, (ii) inference of a gaussian signal with unknown spectrum, and (iii) inference of a poissonian log-normal signal with unknown spectrum, the combination of (i) and (ii). Finally we explain how gaussian knowledge states constructed by the minimal Gibbs free energy principle at different temperatures can be combined into a more accurate surrogate of the non-gaussian posterior.

  20. A method of estimating log weights.

    Treesearch

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...

  1. Measuring colour rivalry suppression in amblyopia.

    PubMed

    Hofeldt, T S; Hofeldt, A J

    1999-11-01

    To determine if the colour rivalry suppression is an index of the visual impairment in amblyopia and if the stereopsis and fusion evaluator (SAFE) instrument is a reliable indicator of the difference in visual input from the two eyes. To test the accuracy of the SAFE instrument for measuring the visual input from the two eyes, colour rivalry suppression was measured in six normal subjects. A test neutral density filter (NDF) was placed before one eye to induce a temporary relative afferent defect and the subject selected the NDF before the fellow eye to neutralise the test NDF. In a non-paediatric private practice, 24 consecutive patients diagnosed with unilateral amblyopia were tested with the SAFE. Of the 24 amblyopes, 14 qualified for the study because they were able to fuse images and had no comorbid disease. The relation between depth of colour rivalry suppression, stereoacuity, and interocular difference in logMAR acuity was analysed. In normal subjects, the SAFE instrument reversed temporary defects of 0.3 to 1.8 log units to within 0.6 log units. In amblyopes, the NDF to reverse colour rivalry suppression was positively related to interocular difference in logMAR acuity (beta=1.21, p<0.0001), and negatively related to stereoacuity (beta=-0.16, p=0.019). The interocular difference in logMAR acuity was negatively related to stereoacuity (beta=-0.13, p=0.009). Colour rivalry suppression as measured with the SAFE was found to agree closely with the degree of visual acuity impairment in non-paediatric patients with amblyopia.

  2. Study on probability distribution of prices in electricity market: A case study of zhejiang province, china

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    Studying the probability density function and distribution function of electricity prices helps power suppliers and purchasers make accurate estimates for their own operations, and helps the regulator monitor periods that deviate from a normal distribution. Based on the assumption of a normally distributed load and the non-linear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas the prices deviate from a normal distribution and show strong right-skewness otherwise. The real electricity markets also display a narrow-peak characteristic when undersupply occurs.
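
    The mechanism described, a normally distributed load pushed through a convex aggregate supply curve, can be illustrated with a hypothetical exponential supply curve; the curve and all parameters below are illustrative assumptions, not the Zhejiang market model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def supply_curve_price(load):
        """Hypothetical convex aggregate supply curve: price climbs steeply as load nears capacity."""
        return 200.0 + 30.0 * np.exp(3.0 * load)   # arbitrary currency/MWh; load normalized to capacity

    # Normally distributed load, clipped to the physical range [0, 1]
    load = np.clip(rng.normal(0.5, 0.15, 500_000), 0.0, 1.0)
    price = supply_curve_price(load)

    # Right-skewness of the resulting price distribution
    m, s = price.mean(), price.std()
    skew = np.mean((price - m) ** 3) / s**3
    print(f"mean price = {m:.1f}, skewness = {skew:.2f}  (a Gaussian would give 0)")
    ```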

  3. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  4. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.

  5. The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.

    PubMed

    Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F

    2016-10-01

    Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.

  6. Estimating Effective Seismic Anisotropy Of Coal Seam Gas Reservoirs from Sonic Log Data Using Orthorhombic Backus-style Upscaling

    NASA Astrophysics Data System (ADS)

    Gross, Lutz; Tyson, Stephen

    2015-04-01

    Fracture density and orientation are key parameters controlling productivity of coal seam gas reservoirs. Seismic anisotropy can help to identify and quantify fracture characteristics. In particular, wide-offset land seismic recordings with dense azimuthal coverage offer the opportunity for recovery of anisotropy parameters. In many coal seam gas reservoirs (e.g., the Walloon Subgroup in the Surat Basin, Queensland, Australia; Esterle et al. 2013) the thicknesses of coal beds and interbeds (e.g., mudstone) are well below the seismic wavelength (0.3-1 m versus 5-15 m). In these situations, the observed seismic anisotropy parameters represent effective elastic properties of the composite medium formed of fractured, anisotropic coal and isotropic interbed. As a consequence, observed seismic anisotropy cannot be linked directly to fracture characteristics but requires a more careful interpretation. In this paper we discuss techniques to estimate effective seismic anisotropy parameters from well log data, with the objective of improving the interpretation for the case of thin layered coal beds. In the first step we use sonic log data to reconstruct the elasticity parameters as a function of depth (at the resolution of the sonic log). It is assumed that within a sample fractures are sparse, of the same size and orientation, penny-shaped, and equally spaced. Following the classical fracture model, this can be modeled as an elastic horizontally transversely isotropic (HTI) medium (Schoenberg & Sayers 1995). Under the additional assumption of dry fractures, normal and tangential fracture weaknesses are estimated from the slow and fast shear wave velocities of the sonic log. In the second step we apply Backus-style upscaling to construct effective anisotropy parameters on an appropriate length scale. In order to honor the HTI anisotropy present in each layer, we have developed a new extension of the classical Backus averaging for layered isotropic media (Backus 1962). Our new method assumes layered HTI media with constant anisotropy orientation as recovered in the first step. It leads to an effective horizontal orthorhombic elastic model. From this model, Thomsen-style anisotropy parameters are calculated to derive azimuth-dependent normal moveout (NMO) velocities (see Grechka & Tsvankin 1998). In our presentation we will show results of our approach from sonic well logs in the Surat Basin to investigate the potential of reconstructing S-wave velocity anisotropy and fracture density from azimuth-dependent NMO velocity profiles.
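
    The upscaling step builds on Backus (1962) averaging. The sketch below implements only the classical isotropic-layer case, which yields an effective vertically transversely isotropic (VTI) medium, not the orthorhombic HTI extension developed in the paper; the layer properties are made-up values.

    ```python
    import numpy as np

    def backus_vti(vp, vs, rho, h):
        """Classical Backus average of thin isotropic layers -> effective VTI stiffnesses (Backus 1962).

        vp, vs : layer P- and S-wave velocities (m/s)
        rho    : layer densities (kg/m^3)
        h      : layer thicknesses (m), used as averaging weights
        Returns (C11, C13, C33, C44, C66) in Pa.
        """
        w = np.asarray(h, float)
        w = w / w.sum()
        mu = rho * vs**2                 # shear modulus per layer
        lam = rho * vp**2 - 2.0 * mu     # Lame parameter per layer

        def avg(x):                      # thickness-weighted arithmetic mean
            return np.sum(w * x)

        C33 = 1.0 / avg(1.0 / (lam + 2 * mu))
        C44 = 1.0 / avg(1.0 / mu)
        C66 = avg(mu)
        C13 = avg(lam / (lam + 2 * mu)) * C33
        C11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + avg(lam / (lam + 2 * mu)) ** 2 * C33
        return C11, C13, C33, C44, C66

    # Made-up stack: alternating 0.5 m coal and 1.0 m siltstone layers
    vp  = np.array([2300.0, 3400.0] * 10)
    vs  = np.array([1100.0, 1900.0] * 10)
    rho = np.array([1400.0, 2500.0] * 10)
    h   = np.array([0.5, 1.0] * 10)

    C11, C13, C33, C44, C66 = backus_vti(vp, vs, rho, h)
    eps = (C11 - C33) / (2 * C33)        # Thomsen epsilon of the effective medium
    gam = (C66 - C44) / (2 * C44)        # Thomsen gamma of the effective medium
    print(f"Thomsen-style anisotropy of the effective medium: epsilon = {eps:.3f}, gamma = {gam:.3f}")
    ```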

  7. Upland log volumes and conifer establishment patterns in two northern, upland old-growth redwood forests, a brief synopsis

    Treesearch

    Daniel J. Porter; John O. Sawyer

    2007-01-01

    We characterized the volume, weight and top surface area of naturally fallen logs in an old-growth redwood forest, and quantified conifer recruit densities on these logs and on the surrounding forest floor. We report significantly greater conifer recruit densities on log substrates as compared to the forest floor. Log substrate availability was calculated on a per...

  8. Improved grading system for structural logs for log homes

    Treesearch

    D.W. Green; T.M. Gorman; J.W. Evans; J.F. Murphy

    2004-01-01

    Current grading standards for logs used in log home construction use visual criteria to sort logs into either “wall logs” or structural logs (round and sawn round timbers). The conservative nature of this grading system, and the grouping of stronger and weaker species for marketing purposes, probably results in the specification of logs with larger diameter than would...

  9. Large scale IRAM 30 m CO-observations in the giant molecular cloud complex W43

    NASA Astrophysics Data System (ADS)

    Carlhoff, P.; Nguyen Luong, Q.; Schilke, P.; Motte, F.; Schneider, N.; Beuther, H.; Bontemps, S.; Heitsch, F.; Hill, T.; Kramer, C.; Ossenkopf, V.; Schuller, F.; Simon, R.; Wyrowski, F.

    2013-12-01

    We aim to fully describe the distribution and location of dense molecular clouds in the giant molecular cloud complex W43. It was previously identified as one of the most massive star-forming regions in our Galaxy. To trace the moderately dense molecular clouds in the W43 region, we initiated W43-HERO, a large program using the IRAM 30 m telescope, which covers a wide dynamic range of scales from 0.3 to 140 pc. We obtained on-the-fly maps in 13CO (2-1) and C18O (2-1) with a high spectral resolution of 0.1 km s^-1 and a spatial resolution of 12''. These maps cover an area of ~1.5 square degrees and include the two main clouds of W43 and the lower density gas surrounding them. A comparison to Galactic models and previous distance calculations confirms the location of W43 near the tangential point of the Scutum arm at approximately 6 kpc from the Sun. The resulting intensity cubes of the observed region are separated into subcubes, which are centered on single clouds and then analyzed in detail. The optical depth, excitation temperature, and H2 column density maps are derived from the 13CO and C18O data. These results are then compared to those derived from Herschel dust maps. The mass of a typical cloud is several 10^4 M⊙ while the total mass in the dense molecular gas (>10^2 cm^-3) in W43 is found to be ~1.9 × 10^6 M⊙. Probability distribution functions obtained from column density maps derived from molecular line data and Herschel imaging show a log-normal distribution for low column densities and a power-law tail for high densities. A flatter slope for the molecular line data probability distribution function may imply that those selectively show the gravitationally collapsing gas. Appendices are available in electronic form at http://www.aanda.org. The final datacubes (13CO and C18O) for the entire survey are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A24

  10. Chemical analysis of water samples and geophysical logs from cored test holes drilled in the central Oklahoma Aquifer, Oklahoma

    USGS Publications Warehouse

    Schlottmann, Jamie L.; Funkhouser, Ron A.

    1991-01-01

    Chemical analyses of water from eight test holes and geophysical logs for nine test holes drilled in the Central Oklahoma aquifer are presented. The test holes were drilled to investigate local occurrences of potentially toxic, naturally occurring trace substances in ground water. These trace substances include arsenic, chromium, selenium, residual alpha-particle activities, and uranium. Eight of the nine test holes were drilled near wells known to contain large concentrations of one or more of the naturally occurring trace substances. One test hole was drilled in an area known to have only small concentrations of any of the naturally occurring trace substances. Water samples were collected from one to eight individual sandstone layers within each test hole. A total of 28 water samples, including four duplicate samples, were collected. The temperature, pH, specific conductance, alkalinity, and dissolved-oxygen concentrations were measured at the sample site. Laboratory determinations included major ions, nutrients, dissolved organic carbon, and trace elements (aluminum, arsenic, barium, beryllium, boron, cadmium, chromium, hexavalent chromium, cobalt, copper, iron, lead, lithium, manganese, mercury, molybdenum, nickel, selenium, silver, strontium, vanadium and zinc). Radionuclide activities and stable isotope (δ) values also were determined, including: gross-alpha-particle activity, gross-beta-particle activity, radium-226, radium-228, radon-222, uranium-234, uranium-235, uranium-238, total uranium, carbon-13/carbon-12, deuterium/hydrogen-1, oxygen-18/oxygen-16, and sulfur-34/sulfur-32. Additional analyses of arsenic and selenium species are presented for selected samples, as well as analyses of density and iodine for two samples, tritium for three samples, and carbon-14 for one sample. Geophysical logs for most test holes include caliper, neutron, gamma-gamma, natural-gamma logs, spontaneous potential, long- and short-normal resistivity, and single-point resistance. Logs for test-hole NOTS 7 do not include long- and short-normal resistivity, spontaneous-potential, or single-point resistivity logs. Logs for test-hole NOTS 7A include only caliper and natural-gamma logs.

  11. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.

  12. IMPROVED V II log(gf) VALUES, HYPERFINE STRUCTURE CONSTANTS, AND ABUNDANCE DETERMINATIONS IN THE PHOTOSPHERES OF THE SUN AND METAL-POOR STAR HD 84937

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, M. P.; Lawler, J. E.; Den Hartog, E. A.

    2014-10-01

    New experimental absolute atomic transition probabilities are reported for 203 lines of V II. Branching fractions are measured from spectra recorded using a Fourier transform spectrometer and an echelle spectrometer. The branching fractions are normalized with radiative lifetime measurements to determine the new transition probabilities. Generally good agreement is found between this work and previously reported V II transition probabilities. Two spectrometers, independent radiometric calibration methods, and independent data analysis routines enable a reduction in systematic uncertainties, in particular those due to optical depth errors. In addition, new hyperfine structure constants are measured for selected levels by least squares fitting line profiles in the FTS spectra. The new V II data are applied to high resolution visible and UV spectra of the Sun and metal-poor star HD 84937 to determine new, more accurate V abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. Very good agreement is found between our new solar photospheric V abundance, log ε(V) = 3.95 from 15 V II lines, and the solar-system meteoritic value. In HD 84937, we derive [V/H] = -2.08 from 68 lines, leading to a value of [V/Fe] = 0.24.

  13. Probabilistic approaches to accounting for data variability in the practical application of bioavailability in predicting aquatic risks from metals.

    PubMed

    Ciffroy, Philippe; Charlatchka, Rayna; Ferreira, Daniel; Marang, Laura

    2013-07-01

    The biotic ligand model (BLM) theoretically enables the derivation of environmental quality standards that are based on true bioavailable fractions of metals. Several physicochemical variables (especially pH, major cations, dissolved organic carbon, and dissolved metal concentrations) must, however, be assigned to run the BLM, but they are highly variable in time and space in natural systems. This article describes probabilistic approaches for integrating such variability during the derivation of risk indexes. To describe each variable using a probability density function (PDF), several methods were combined to 1) treat censored data (i.e., data below the limit of detection), 2) incorporate the uncertainty of the solid-to-liquid partitioning of metals, and 3) detect outliers. From a probabilistic perspective, 2 alternative approaches that are based on log-normal and Γ distributions were tested to estimate the probability of the predicted environmental concentration (PEC) exceeding the predicted non-effect concentration (PNEC), i.e., p(PEC/PNEC>1). The probabilistic approach was tested on 4 real-case studies based on Cu-related data collected from stations on the Loire and Moselle rivers. The approach described in this article is based on BLM tools that are freely available for end-users (i.e., the Bio-Met software) and on accessible statistical data treatments. This approach could be used by stakeholders who are involved in risk assessments of metals for improving site-specific studies. Copyright © 2013 SETAC.
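
    A stripped-down version of the risk-index calculation can be sketched by assuming log-normal PDFs for both PEC and PNEC and estimating p(PEC/PNEC > 1) by Monte Carlo; the parameters below are invented for illustration and are not the Loire or Moselle Cu data.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)

    # Illustrative log-normal parameters (geometric mean, geometric SD); not site data
    pec_gm, pec_gsd = 2.0, 2.5    # predicted environmental concentration, ug/L
    pnec_gm, pnec_gsd = 8.0, 1.8  # predicted no-effect concentration, ug/L

    n = 1_000_000
    pec = rng.lognormal(np.log(pec_gm), np.log(pec_gsd), n)
    pnec = rng.lognormal(np.log(pnec_gm), np.log(pnec_gsd), n)
    print(f"Monte Carlo  p(PEC/PNEC > 1) = {np.mean(pec / pnec > 1.0):.3f}")

    # Cross-check: the ratio of two independent log-normals is itself log-normal,
    # so the exceedance probability also has a closed form.
    mu = np.log(pec_gm) - np.log(pnec_gm)
    sigma = np.hypot(np.log(pec_gsd), np.log(pnec_gsd))
    print(f"closed form  p(PEC/PNEC > 1) = {1.0 - norm.cdf(-mu / sigma):.3f}")
    ```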

  14. Permeability structure and its influence on microbial activity at off-Shimokita basin, Japan

    NASA Astrophysics Data System (ADS)

    Tanikawa, W.; Yamada, Y.; Sanada, Y.; Kubo, Y.; Inagaki, F.

    2016-12-01

    The microbial populations and the limit of microbial life are probably limited by chemical, physical, and geological conditions, such as temperature, pore water chemistry, pH, and water activity; however, the key parameters affecting growth in deep subseafloor sediments remain unclarified (Hinrichs and Inagaki 2012). IODP Expedition 337 was conducted near a continental margin basin off Shimokita Peninsula, Japan, to investigate the microbial activity in deep marine coalbed sediments down to 2500 mbsf. Inagaki et al. (2015) discovered that microbial abundance decreased markedly with depth (the lowest cell density of <1 cell/cm^3 was recorded below 2000 mbsf), and that the coal bed layers had relatively higher cell densities. In this study, permeability was measured on core samples from IODP Expedition 337 and Expedition CK06-06 of the D/V Chikyu shakedown cruise. Permeability was measured at in-situ effective pressure conditions and calculated by the steady-state flow method, keeping the differential pore pressure between 0.1 and 0.8 MPa. Our results show that the permeability of the core samples decreases with depth, from 10^-16 m^2 at the seafloor to 10^-20 m^2 at the bottom of the hole. However, permeability is highly scattered within the coal bed unit (1900 to 2000 mbsf). Permeabilities for sandstone and coal are higher than those for siltstone and shale, so the scatter of the permeabilities within the same unit is due to the high variation in lithology. The highest permeability was observed in coal samples, probably due to the formation of micro cracks (cleats). Permeability estimated from NMR logging using empirical parameters is around two orders of magnitude higher than the permeability of core samples, even though the relative vertical variation of permeability is quite similar between core and logging data. The higher cell densities are observed in the relatively permeable formations. On the other hand, the correlation between cell density, water activity, and porosity is not clear. On the assumption that the pressure gradient is constant with depth, the flow rate is proportional to the permeability of the sediments. Flow rate probably restricts the availability of energy and nutrients for microorganisms; therefore, permeability might have influenced the microbial activity in the coalbed basin.

  15. The 4 Ms CHANDRA Deep Field-South Number Counts Apportioned by Source Class: Pervasive Active Galactic Nuclei and the Ascent of Normal Galaxies

    NASA Technical Reports Server (NTRS)

    Lehmer, Bret D.; Xue, Y. Q.; Brandt, W. N.; Alexander, D. M.; Bauer, F. E.; Brusa, M.; Comastri, A.; Gilli, R.; Hornschemeier, A. E.; Luo, B.

    2012-01-01

    We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV (hereafter soft, hard, ultra-hard, and full bands, respectively) cumulative and differential number-count (log N-log S) measurements for the recently completed ≈4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of ≈1.9-4.3 times fainter than the previously deepest number-count investigations. In the soft band (SB), the most sensitive bandpass in our analysis, the ≈4 Ms CDF-S reaches a maximum source density of ≈27,800 deg^-2. By virtue of the exquisite X-ray and multiwavelength data available in the CDF-S, we are able to measure the number counts from a variety of source populations (active galactic nuclei (AGNs), normal galaxies, and Galactic stars) and subpopulations (as a function of redshift, AGN absorption, luminosity, and galaxy morphology) and test models that describe their evolution. We find that AGNs still dominate the X-ray number counts down to the faintest flux levels for all bands and reach a limiting SB source density of ≈14,900 deg^-2, the highest reliable AGN source density measured at any wavelength. We find that the normal-galaxy counts rise rapidly near the flux limits and, at the limiting SB flux, reach source densities of ≈12,700 deg^-2 and make up 46% ± 5% of the total number counts. The rapid rise of the galaxy counts toward faint fluxes, as well as significant normal-galaxy contributions to the overall number counts, indicates that normal galaxies will overtake AGNs just below the ≈4 Ms SB flux limit and will provide a numerically significant new X-ray source population in future surveys that reach below the ≈4 Ms sensitivity limit. We show that a future ≈10 Ms CDF-S would allow for a significant increase in X-ray-detected sources, with many of the new sources being cosmologically distant (z ≳ 0.6) normal galaxies.

  16. Responses of crayfish photoreceptor cells following intense light adaptation.

    PubMed

    Cummins, D R; Goldsmith, T H

    1986-01-01

    After intense orange adapting exposures that convert 80% of the rhodopsin in the eye to metarhodopsin, rhabdoms become covered with accessory pigment and appear to lose some microvillar order. Only after a delay of hours or even days is the metarhodopsin replaced by rhodopsin (Cronin and Goldsmith 1984). After 24 h of dark adaptation, when there has been little recovery of visual pigment, the photoreceptor cells have normal resting potentials and input resistances, and the reversal potential of the light response is 10-15 mV (inside positive), unchanged from controls. The log V vs log I curve is shifted about 0.6 log units to the right on the energy axis, quantitatively consistent with the decrease in the probability of quantum catch expected from the lowered concentration of rhodopsin in the rhabdoms. Furthermore, at 24 h the photoreceptors exhibit a broader spectral sensitivity than controls, which is also expected from accumulations of metarhodopsin in the rhabdoms. In three other respects, however, the transduction process appears to be light adapted: The voltage responses are more phasic than those of control photoreceptors. The relatively larger effect (compared to controls) of low extracellular Ca++ (1 mmol/l EGTA) in potentiating the photoresponses suggests that the photoreceptors may have elevated levels of free cytoplasmic Ca++. The saturating depolarization is only about 30% as large as the maximal receptor potentials of contralateral, dark controls, and by that measure the log V-log I curve is shifted downward by 0.54 log units.(ABSTRACT TRUNCATED AT 250 WORDS)

  17. Design and characterization of a cough simulator.

    PubMed

    Zhang, Bo; Zhu, Chao; Ji, Zhiming; Lin, Chao-Hsin

    2017-02-23

    Expiratory droplets from human coughing have always been considered as potential carriers of pathogens, responsible for respiratory infectious disease transmission. To study the transmission of disease by human coughing, a transient repeatable cough simulator has been designed and built. Cough droplets are generated by different mechanisms, such as the breaking of mucus, condensation and high-speed atomization from different depths of the respiratory tract. These mechanisms in coughing produce droplets of different sizes, represented by a bimodal distribution of 'fine' and 'coarse' droplets. A cough simulator is hence designed to generate transient sprays with such bimodal characteristics. It consists of a pressurized gas tank, a nebulizer and an ejector, connected in series, which are controlled by computerized solenoid valves. The bimodal droplet size distribution is characterized for the coarse droplets and fine droplets, by fibrous collection and laser diffraction, respectively. The measured size distributions of coarse and fine droplets are reasonably represented by the Rosin-Rammler and log-normal distributions in probability density function, which leads to a bimodal distribution. To assess the hydrodynamic consequences of coughing including droplet vaporization and polydispersion, a Lagrangian model of droplet trajectories is established, with its ambient flow field predetermined from a computational fluid dynamics simulation.
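
    The bimodal size distribution can be written as a weighted mixture of a Rosin-Rammler (Weibull) density for the coarse mode and a log-normal density for the fine mode; the sketch below uses placeholder weights and parameters, not the simulator's fitted values.

    ```python
    import numpy as np
    from scipy.stats import lognorm, weibull_min

    # Placeholder parameters, for illustration only
    w_fine = 0.7                              # number fraction of the fine mode
    fine = lognorm(s=0.6, scale=8.0)          # fine droplets: log-normal, median ~8 um
    coarse = weibull_min(c=2.5, scale=120.0)  # coarse droplets: Rosin-Rammler (Weibull), ~120 um scale

    def droplet_pdf(d_um):
        """Bimodal probability density of droplet diameter, in 1/um."""
        return w_fine * fine.pdf(d_um) + (1.0 - w_fine) * coarse.pdf(d_um)

    d = np.array([2.0, 8.0, 30.0, 120.0, 300.0])
    print(dict(zip(d.tolist(), np.round(droplet_pdf(d), 5).tolist())))
    ```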

  18. Rotations of large inertial cubes, cuboids, cones, and cylinders in turbulence

    NASA Astrophysics Data System (ADS)

    Pujara, Nimish; Oehmke, Theresa B.; Bordoloi, Ankur D.; Variano, Evan A.

    2018-05-01

    We conduct experiments to investigate the rotations of freely moving particles in a homogeneous isotropic turbulent flow. The particles are nearly neutrally buoyant and the particle size exceeds the Kolmogorov scale so that they are too large to be considered passive tracers. Particles of several different shapes are considered including those that break axisymmetry and fore-aft symmetry. We find that regardless of shape the mean-square particle angular velocity scales as d_eq^{-4/3}, where d_eq is the equivalent diameter of a volume-matched sphere. This scaling behavior is consistent with the notion that velocity differences across a length d_eq in the flow are responsible for particle rotation. We also find that the probability density functions (PDFs) of particle angular velocity collapse for particles of different shapes and similar d_eq. The significance of these results is that the rotations of an inertial, nonspherical particle are only functions of its volume and not its shape. The magnitude of particle angular velocity appears log-normally distributed and individual Cartesian components show long tails. With increasing d_eq, the tails of the PDF become less pronounced, meaning that extreme events of angular velocity become less common for larger particles.

  19. Geostatistics and Bayesian updating for transmissivity estimation in a multiaquifer system in Manitoba, Canada.

    PubMed

    Kennedy, Paula L; Woodbury, Allan D

    2002-01-01

    In ground water flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one method that interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the transmissivity in m^2/s followed a natural log normal distribution for both aquifers, with a mean of -7.2 and -8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal nature was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data was available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration, but is a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable; that is, homogeneous in regions where little or no information is available to alter an initial state. No long range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.

  20. Scintillation statistics measured in an earth-space-earth retroreflector link

    NASA Technical Reports Server (NTRS)

    Bufton, J. L.

    1977-01-01

    Scintillation was measured in a vertical path from a ground-based laser transmitter to the Geos 3 satellite and back to a ground-based receiver telescope, and the experimental results were compared with analytical results presented in a companion paper (Bufton, 1977). The normalized variance, the probability density function, and the power spectral density of scintillation were all measured. Moments of the satellite scintillation data in terms of normalized variance were lower than expected. The power spectrum analysis suggests that there were scintillation components at frequencies higher than the 250 Hz bandwidth available in the experiment.

  1. Procedural revision to the use-dilution methods: establishment of maximum log density value for test microbes on inoculated carriers.

    PubMed

    Tomasino, Stephen F; Pines, Rebecca M; Hamilton, Gordon C

    2012-01-01

    The use-dilution methods for Staphylococcus aureus and for Pseudomonas aeruginosa (964.02) were revised in 2009 to include a standardized procedure to measure the log density of the test microbe and to establish a minimum mean log density value of 6.0 (geometric mean of 1.0 x 10^6 CFU/carrier) to qualify the test results. This report proposes setting a maximum mean log density value of 7.0 (geometric mean of 1.0 x 10^7 CFU/carrier) to further standardize the procedure. The minimum value was based on carrier count data collected by four laboratories over an 8-year period (1999-2006). The data have been updated to include an additional 4 years' worth of data (2006-2010) collected by the same laboratories. A total of 512 tests were conducted on products bearing claims against P. aeruginosa and S. aureus with and without an organic soil load (OSL) added to the inoculum (as specified on the product label claim). Six carriers were assayed in each test, for a total of 3072 carriers. Mean log densities for each of the 512 tests were at least 6.0. With the exception of two tests, one for P. aeruginosa without OSL and one for S. aureus with OSL, the mean log densities did not exceed 7.5 (geometric mean of 3.2 x 10^7 CFU/carrier). Across microbes and OSL treatments, the mean log density (+/- SEM) was 6.80 (+/- 0.07) per carrier (a geometric mean of 6.32 x 10^6 CFU/carrier) and acceptable repeatability (0.28) and reproducibility (0.31) SDs were exhibited. A maximum mean log density per carrier of 7.0 is being proposed here as a validity requirement for S. aureus and P. aeruginosa. A modification to the method to allow for dilution of the final test cultures to achieve carrier counts within 6.0-7.0 logs is also being proposed. Establishing a range of 6.0-7.0 logs will help improve the reliability of the method and should allow for more consistent results within and among laboratories.
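
    The log-density bookkeeping behind the proposed 6.0-7.0 window is just the mean of log10 carrier counts; a small sketch with made-up CFU/carrier counts (not the interlaboratory data) shows the check for one test.

    ```python
    import numpy as np

    # Made-up CFU/carrier counts for the six carriers of one test (illustrative only)
    cfu = np.array([4.1e6, 7.9e6, 5.5e6, 9.2e6, 3.6e6, 6.4e6])

    mean_log_density = np.log10(cfu).mean()    # mean log10 density per carrier
    geometric_mean = 10 ** mean_log_density    # geometric mean CFU/carrier

    print(f"mean log density = {mean_log_density:.2f}")
    print(f"geometric mean   = {geometric_mean:.2e} CFU/carrier")
    print("within the proposed 6.0-7.0 window:", 6.0 <= mean_log_density <= 7.0)
    ```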

  2. Dynamical Epidemic Suppression Using Stochastic Prediction and Control

    DTIC Science & Technology

    2004-10-28

    initial probability density function (PDF), p: D ⊂ R^2 → R, is defined by the stochastic Frobenius-Perron ... For deterministic systems, normal methods of ... induced chaos. To analyze the qualitative change, we apply the technique of the stochastic Frobenius-Perron operator [L. Billings et al., Phys. Rev. Lett. ...] ... transition matrix describing the probability of transport from one region of phase space to another, which approximates the stochastic Frobenius-Perron ...

  3. Density of large snags and logs in northern Arizona mixed-conifer and ponderosa pine forests

    Treesearch

    Joseph L. Ganey; Benjamin J. Bird; L. Scott Baggett; Jeffrey S. Jenness

    2015-01-01

    Large snags and logs provide important biological legacies and resources for native wildlife, yet data on populations of large snags and logs and factors influencing those populations are sparse. We monitored populations of large snags and logs in mixed-conifer and ponderosa pine (Pinus ponderosa) forests in northern Arizona from 1997 through 2012. We modeled density...

  4. Wood density-moisture profiles in old-growth Douglas-fir and western hemlock.

    Treesearch

    W.Y. Pong; Dale R. Waddell; Michael B. Lambert

    1986-01-01

    Accurate estimation of the weight of each load of logs is necessary for safe and efficient aerial logging operations. The prediction of green density (lb/ft3) as a function of height is a critical element in the accurate estimation of tree bole and log weights. Two sampling methods, disk and increment core (Bergstrom xylodensimeter), were used to measure the density-...

  5. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power law decay, with density f(x) ≃ f_0 x^-2, with the coefficient depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has density that decays with an x^-2 power law. Various parameter limits are established.
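
    The x^-2 tail is straightforward to verify numerically: for a density decaying as x^-2, the survival function P(|X| > t) should decay as t^-1. The sketch below uses arbitrary means, variances, and correlation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Correlated normal numerator and denominator with arbitrary parameters
    mu = [1.0, 0.5]
    rho, s1, s2 = 0.3, 1.0, 2.0
    cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
    num, den = rng.multivariate_normal(mu, cov, size=2_000_000).T
    x = num / den

    # For a density decaying like x^-2, the survival function P(|X| > t) decays like t^-1
    t = np.logspace(1, 3, 10)
    surv = np.array([np.mean(np.abs(x) > ti) for ti in t])
    slope = np.polyfit(np.log(t), np.log(surv), 1)[0]
    print(f"log-log slope of the survival function: {slope:.2f} (expected close to -1)")
    ```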

  6. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the uncoded bit error rate and the ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results show that larger correlation coefficients among sub-channels lead to degraded system performance, and that receiver diversity is more effective at resisting the channel fading caused by spatial correlation.
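
    Wilkinson's (Fenton-Wilkinson) approximation matches the first two moments of the sum of correlated log-normal variables to a single log-normal. A minimal sketch with generic sub-channel parameters (not the optical-channel values from the paper):

    ```python
    import numpy as np

    def wilkinson(mu, sigma, corr):
        """Fenton-Wilkinson: approximate sum_i exp(Y_i), Y ~ N(mu, Sigma), by exp(Z), Z ~ N(mu_z, sigma_z^2)."""
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        cov = corr * np.outer(sigma, sigma)        # covariance of the underlying Gaussians
        m1 = np.sum(np.exp(mu + 0.5 * sigma**2))   # E[S]
        mm = mu[:, None] + mu[None, :]
        vv = sigma[:, None]**2 + sigma[None, :]**2 + 2 * cov
        m2 = np.sum(np.exp(mm + 0.5 * vv))         # E[S^2]
        sigma_z2 = np.log(m2 / m1**2)
        mu_z = np.log(m1) - 0.5 * sigma_z2
        return mu_z, np.sqrt(sigma_z2)

    # Example: four sub-channels, equal fading parameters, pairwise correlation 0.3
    mu = [0.0, 0.0, 0.0, 0.0]
    sigma = [0.5, 0.5, 0.5, 0.5]
    corr = np.full((4, 4), 0.3) + 0.7 * np.eye(4)
    mu_z, sigma_z = wilkinson(mu, sigma, corr)
    print(f"approximating log-normal: mu_z = {mu_z:.3f}, sigma_z = {sigma_z:.3f}")

    # Monte Carlo check of the first moment of the sum
    rng = np.random.default_rng(6)
    y = rng.multivariate_normal(mu, corr * np.outer(sigma, sigma), size=500_000)
    print("E[S] Monte Carlo vs Wilkinson:",
          round(float(np.exp(y).sum(axis=1).mean()), 3),
          round(float(np.exp(mu_z + 0.5 * sigma_z**2)), 3))
    ```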

  7. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed and expected number of cases in an area, which has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method, and that it can overcome the SMR problem when there is no observed bladder cancer in an area.

  8. Observed, unknown distributions of clinical chemical quantities should be considered to be log-normal: a proposal.

    PubMed

    Haeckel, Rainer; Wosniok, Werner

    2010-10-01

    The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories are then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
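
    A common way to realize a ZMNL transformation of this kind is to map each Gaussian sample through the standard normal CDF and then through the inverse CDF of the target marginal, x(t) = F^-1(Φ(z(t))); the sketch below shows this generic construction with an arbitrary gamma target, not Smallwood's specific function, and starts from a white Gaussian series rather than one with a specified cross-spectral density matrix.

    ```python
    import numpy as np
    from scipy.stats import norm, gamma

    rng = np.random.default_rng(7)

    # A Gaussian "time history" (white here for simplicity; the full method starts from
    # correlated Gaussian histories with a specified cross-spectral density matrix)
    z = rng.standard_normal(200_000)

    # Zero-memory nonlinear (ZMNL) map: Gaussian -> target marginal, sample by sample
    target = gamma(a=3.0, scale=1.0)       # arbitrary non-Gaussian marginal for illustration
    x = target.ppf(norm.cdf(z))            # x(t) = F_target^{-1}( Phi( z(t) ) )

    print("target mean/var:", round(float(target.mean()), 3), round(float(target.var()), 3))
    print("sample mean/var:", round(float(x.mean()), 3), round(float(x.var()), 3))
    ```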

  10. 47 CFR 76.1706 - Signal leakage logs and repair records.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Cable operators shall maintain a log showing the date and location of ... the probable cause of the leakage. The log shall be kept on file for a period of two years and shall ... (47 CFR 76.1706).

  11. 47 CFR 76.1706 - Signal leakage logs and repair records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Cable operators shall maintain a log showing the date and location of ... the probable cause of the leakage. The log shall be kept on file for a period of two years and shall ... (47 CFR 76.1706).

  12. Intermittent burst of a super rogue wave in the breathing multi-soliton regime of an anomalous fiber ring cavity.

    PubMed

    Lee, Seungjong; Park, Kyoungyoon; Kim, Hyuntai; Vazquez-Zuniga, Luis Alonso; Kim, Jinseob; Jeong, Yoonchan

    2018-04-30

    We report the intermittent burst of a super rogue wave in the multi-soliton (MS) regime of an anomalous-dispersion fiber ring cavity. We exploit the spatio-temporal measurement technique to log and capture the shot-to-shot wave dynamics of various pulse events in the cavity, and obtain the corresponding intensity probability density function, which eventually unveils the inherent nature of the extreme events encompassed therein. In the breathing MS regime, a specific MS regime with a heavy soliton population, the natural probability of pulse interaction among solitons and dispersive waves increases exponentially owing to the extraordinarily high soliton population density. The combination of the probabilistically initiated soliton interactions and the dispersive waves subsequently accompanying them triggers an avalanche of extreme events with even higher intensities, culminating in a burst of a super rogue wave nearly ten times stronger than the average solitons observed in the cavity. Without any cavity modification or control, the process naturally and intermittently recurs on a time scale of the order of ten seconds.

  13. Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.

    PubMed

    Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe

    2013-04-01

    Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model based on the finite-difference time-domain solving of the linearized Euler equations in quantitatively reproducing the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.

  14. A cross-site comparison of methods used for hydrogeologic characterization of the Galena-Platteville aquifer in Illinois and Wisconsin, with examples from selected Superfund sites

    USGS Publications Warehouse

    Kay, Robert T.; Mills, Patrick C.; Dunning, Charles P.; Yeskis, Douglas J.; Ursic, James R.; Vendl, Mark

    2004-01-01

    The effectiveness of 28 methods used to characterize the fractured Galena-Platteville aquifer at eight sites in northern Illinois and Wisconsin is evaluated. Analysis of government databases, previous investigations, topographic maps, aerial photographs, and outcrops was essential to understanding the hydrogeology in the area to be investigated. The effectiveness of surface-geophysical methods depended on site geology. Lithologic logging provided essential information for site characterization. Cores were used for stratigraphy and geotechnical analysis. Natural-gamma logging helped identify the effect of lithology on the location of secondary-permeability features. Caliper logging identified large secondary-permeability features. Neutron logs identified trends in matrix porosity. Acoustic-televiewer logs identified numerous secondary-permeability features and their orientation. Borehole-camera logs also identified a number of secondary-permeability features. Borehole ground-penetrating radar identified lithologic and secondary-permeability features; however, the accuracy and completeness of this method are uncertain. Single-point-resistance, density, and normal resistivity logs were of limited use. Water-level and water-quality data identified flow directions and indicated the horizontal and vertical distribution of aquifer permeability and the depth of the permeable features. Temperature, spontaneous potential, and fluid-resistivity logging identified few secondary-permeability features at some sites and several features at others. Flowmeter logging was the most effective geophysical method for characterizing secondary-permeability features. Aquifer tests provided insight into the permeability distribution; identified hydraulically interconnected features, heterogeneity, and anisotropy; and determined effective porosity. Aquifer heterogeneity prevented calculation of accurate hydraulic properties from some tests. Different methods, such as flowmeter logging and slug testing, occasionally produced different interpretations. Aquifer characterization improved with an increase in the number of data points, the period of data collection, and the number of methods used.

  15. The probability density function (PDF) of Lagrangian Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, B.

    2012-12-01

    The statistical theory of Lagrangian turbulence is derived from the stochastic Navier-Stokes equation. Assuming that the noise in fully-developed turbulence is a generic noise determined by the general theorems in probability, the central limit theorem and the large deviation principle, we are able to formulate and solve the Kolmogorov-Hopf equation for the invariant measure of the stochastic Navier-Stokes equations. The intermittency corrections to the scaling exponents of the structure functions require a multiplicative (multiplying the fluid velocity) noise in the stochastic Navier-Stokes equation. We let this multiplicative noise in the equation consist of a simple (Poisson) jump process and then show how the Feynman-Kac formula produces the log-Poissonian processes found by She and Leveque, Waymire and Dubrulle. These log-Poissonian processes give the intermittency corrections that agree with modern direct Navier-Stokes simulations (DNS) and experiments. The probability density function (PDF) plays a key role when direct Navier-Stokes simulations or experimental results are compared to theory. The statistical theory of turbulence, including the scaling of the structure functions, is determined by the invariant measure of the Navier-Stokes equation, and the PDFs for the various statistics (one-point, two-point, N-point) can be obtained by taking the trace of the corresponding invariant measures. Hopf derived in 1952 a functional equation for the characteristic function (Fourier transform) of the invariant measure. In contrast to the nonlinear Navier-Stokes equation, this is a linear functional differential equation. The PDFs obtained from the invariant measures for the velocity differences (two-point statistics) are shown to be the four-parameter generalized hyperbolic distributions, found by Barndorff-Nielsen. These PDFs have heavy tails and a convex peak at the origin. A suitable projection of the Kolmogorov-Hopf equations is the differential equation determining the generalized hyperbolic distributions. We then compare these PDFs with DNS results and experimental data.

  16. Improving the AOAC use-dilution method by establishing a minimum log density value for test microbes on inoculated carriers.

    PubMed

    Tomasino, Stephen F; Pines, Rebecca M; Hamilton, Martin A

    2009-01-01

    The AOAC Use-Dilution methods, 955.14 (Salmonella enterica), 955.15 (Staphylococcus aureus), and 964.02 (Pseudomonas aeruginosa), are used to measure the efficacy of disinfectants on hard inanimate surfaces. The methods do not provide procedures to assess log density of the test microbe on inoculated penicylinders (carrier counts). Without a method to measure and monitor carrier counts, the associated efficacy data may not be reliable and repeatable. This report provides a standardized procedure to address this method deficiency. Based on carrier count data collected by four laboratories over an 8 year period, a minimum log density value is proposed to qualify the test results. Carrier count data were collected concurrently with 242 Use-Dilution tests. The tests were conducted on products bearing claims against P. aeruginosa and S. aureus with and without an organic soil load (OSL) added to the inoculum (as specified on the product label claim). Six carriers were assayed per test for a total of 1452 carriers. All 242 mean log densities were at least 6.0 (geometric mean of 1.0 × 10^6 CFU/carrier). The mean log densities did not exceed 7.5 (geometric mean of 3.2 × 10^7 CFU/carrier). For all microbes and OSL treatments, the mean log density (+/- SEM) was 6.7 (+/- 0.07) per carrier (a geometric mean of 5.39 × 10^6 CFU/carrier). The mean log density for six carriers per test showed good repeatability (0.29) and reproducibility (0.32). A minimum mean log density of 6.0 is proposed as a validity requirement for S. aureus and P. aeruginosa. The minimum level provides for the potential inherent variability that may be experienced by a wide range of laboratories and the slight effect due to the addition of an OSL. A follow-up report is planned to present data to support the carrier count procedure and carrier counts for S. enterica.
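
    The conversion between a mean log10 density and a geometric mean CFU/carrier quoted above is a one-line calculation, reproduced below with the values from the abstract (the reported 5.39 × 10^6 corresponds to an unrounded mean log density of about 6.73).

    ```python
    # A mean log10 density maps to a geometric mean CFU/carrier via 10**(log density).
    # The log density values are taken directly from the abstract.
    mean_log_densities = {"minimum proposed": 6.0, "overall mean": 6.7, "maximum observed": 7.5}
    for label, logd in mean_log_densities.items():
        print(f"{label}: 10^{logd} = {10**logd:.2e} CFU/carrier")
    # 10^6.7 ~ 5.0e6 CFU/carrier, consistent with the reported ~5.4e6 once rounding of 6.7 is allowed for
    ```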

  17. Log-Normal Turbulence Dissipation in Global Ocean Models

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O (10 km ) . As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  18. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    PubMed

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P, and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. Generalizing the fitted function across the main species shows that individual models need to be developed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fits were obtained with the log-normal function.
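
    A hedged sketch of the model-selection step described above: fit the candidate distributions by maximum likelihood and rank them by AIC. The simulated diameters stand in for real stand data, and the Burr family is taken here as Burr XII (scipy's burr12); both are assumptions for illustration.

    ```python
    # Fit several candidate diameter distributions and rank them by AIC.
    # The simulated sample and the Burr XII choice are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    diameters = rng.lognormal(mean=2.0, sigma=0.5, size=500)  # hypothetical DBH sample (cm)

    candidates = {
        "log-normal": stats.lognorm,
        "gamma": stats.gamma,
        "Weibull 2P": stats.weibull_min,
        "Burr (XII)": stats.burr12,
    }

    results = []
    for name, dist in candidates.items():
        params = dist.fit(diameters, floc=0)        # fix location at zero for 2-parameter forms
        loglik = np.sum(dist.logpdf(diameters, *params))
        k = len(params) - 1                          # free parameters (location was fixed)
        results.append((2 * k - 2 * loglik, name))

    for aic, name in sorted(results):
        print(f"{name:12s} AIC = {aic:.1f}")
    ```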

  19. Influence of particle size distribution on nanopowder cold compaction processes

    NASA Astrophysics Data System (ADS)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. The interaction of particles, in addition to the well-known contact laws, involves dispersion forces of attraction and the possibility of interparticle solid-bridge formation, which are of great importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (10 nm diameter) and large (30 nm) particles; and polydisperse systems corresponding to a log-normal size distribution law of varying width. A non-monotonic dependence of compact density on powder composition is revealed in the bidisperse systems. The deviations of compact density in the polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 percent.

  20. Association Between Vessel Density and Visual Acuity in Patients With Diabetic Retinopathy and Poorly Controlled Type 1 Diabetes.

    PubMed

    Dupas, Bénédicte; Minvielle, Wilfried; Bonnin, Sophie; Couturier, Aude; Erginay, Ali; Massin, Pascale; Gaudric, Alain; Tadayoni, Ramin

    2018-05-10

    Capillary dropout is a hallmark of diabetic retinopathy, but its role in visual loss remains unclear. To examine how macular vessel density is correlated with visual acuity (VA) in patients younger than 40 years who have type 1 diabetes without macular edema but who have diabetic retinopathy requiring panretinal photocoagulation. Retrospective cohort study of VA and optical coherence tomography angiography data collected from consecutive patients during a single visit to Lariboisière Hospital, a tertiary referral center in Paris, France. The cohort included 22 eyes of 22 patients with type 1 diabetes without macular edema but with bilateral rapidly progressive diabetic retinopathy that was treated with panretinal photocoagulation between August 15, 2015, and December 30, 2016. Eyes were classified into 2 groups by VA: normal (logMAR, 0; Snellen equivalent, 20/20) and decreased (logMAR, >0; Snellen equivalent, <20/20). The control group included 12 eyes from age-matched healthy participants with normal vision. Visual acuity and mean vessel density in 4 retinal vascular plexuses: the superficial vascular plexus and the deep capillary complex, which comprises the intermediate capillary plexus and the deep capillary plexus. Of the 22 participants, 11 (50%) were men, mean (SD) age was 30 (6) years, and mean (SD) hemoglobin A1c level was 8.9% (1.6%). Of the 22 eyes with diabetic retinopathy, 13 (59%) had normal VA and 9 (41%) had decreased VA (mean [SD]: logMAR, 0.12 [0.04]; Snellen equivalent, 20/25). Mean [SE] vessel density was lower for eyes with diabetic retinopathy and normal VA compared with the control group in the superficial vascular plexus (44.1% [0.9%] vs 49.1% [0.9%]; difference, -5.0% [1.3%]; 95% CI, -7.5% to -2.4%; P < .001), in the deep capillary complex (44.3% [1.2%] vs 50.6% [1.3%]; difference, -6.3% [1.8%]; 95% CI, -9.9% to -2.7%; P = .001), in the intermediate capillary plexus (43.8% [1.2%] vs 49.3% [1.2%]; difference, -5.5% [1.7%]; 95% CI, -9.0% to -2.0%; P = .003), and in the deep capillary plexus (24.5% [1.0%] vs 30.5% [1.0%]; difference, -6.1% [1.4%]; 95% CI, -8.9% to -3.2%; P < .001). Mean vessel density was lower in eyes with diabetic retinopathy and decreased VA compared with eyes with diabetic retinopathy and normal VA; the mean (SE) loss was more pronounced in the deep capillary complex (34.6% [1.5%] vs 44.3% [1.2%]; difference, -9.6% [1.9%]; 95% CI, -13.6% to -5.7%; P < .001), especially in the deep capillary plexus (15.2% [1.2%] vs 24.5% [1.0%]; difference, -9.3% [1.5%]; 95% CI, -12.4% to -6.1%; P < .001), than in the superficial vascular plexus (39.6% [1.1%] vs 44.1% [0.9%]; difference, -4.5% [1.4%]; 95% CI, -7.3% to -1.7%; P = .002). These data suggest that in patients with type 1 diabetes without macular edema but with severe nonproliferative or proliferative diabetic retinopathy, decreased VA may be associated with the degree of capillary loss in the deep capillary complex.

  1. Technical Reports Prepared Under Contract N00014-76-C-0475.

    DTIC Science & Technology

    1987-05-29

    Technical Report No., Title, Author, Date (partial listing):
    264  Approximations to Densities in Geometric Probability (H. Solomon, M.A. Stephens, 10/27/78)
    265  Sequential ... (title incomplete)
    ...  Certain Multivariate Normal Probabilities (S. Iyengar, 8/12/82)
    323  EDF Statistics for Testing for the Gamma Distribution with ... (M.A. Stephens, 8/13/82)
    ...  ... Nets (...-20-85)
    360  Random Sequential Coding by Hamming Distance (Yoshiaki Itoh, Herbert Solomon, 07-11-85)
    361  Transforming Censored Samples and Testing Fit

  2. Canopy Spectral Invariants. Part 2; Application to Classification of Forest Types from Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Schull, M. A.; Knyazikhin, Y.; Xu, L.; Samanta, A.; Carmona, P. L.; Lepine, L.; Jenkins, J. P.; Ganguly, S.; Myneni, R. B.

    2011-01-01

    Many studies have been conducted to demonstrate the ability of hyperspectral data to discriminate plant dominant species. Most of them have employed the use of empirically based techniques, which are site specific, requires some initial training based on characteristics of known leaf and/or canopy spectra and therefore may not be extendable to operational use or adapted to changing or unknown land cover. In this paper we propose a physically based approach for separation of dominant forest type using hyperspectral data. The radiative transfer theory of canopy spectral invariants underlies the approach, which facilitates parameterization of the canopy reflectance in terms of the leaf spectral scattering and two spectrally invariant and structurally varying variables - recollision and directional escape probabilities. The methodology is based on the idea of retrieving spectrally invariant parameters from hyperspectral data first, and then relating their values to structural characteristics of three-dimensional canopy structure. Theoretical and empirical analyses of ground and airborne data acquired by Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over two sites in New England, USA, suggest that the canopy spectral invariants convey information about canopy structure at both the macro- and micro-scales. The total escape probability (one minus recollision probability) varies as a power function with the exponent related to the number of nested hierarchical levels present in the pixel. Its base is a geometrical mean of the local total escape probabilities and accounts for the cumulative effect of canopy structure over a wide range of scales. The ratio of the directional to the total escape probability becomes independent of the number of hierarchical levels and is a function of the canopy structure at the macro-scale such as tree spatial distribution, crown shape and size, within-crown foliage density and ground cover. These properties allow for the natural separation of dominant forest classes based on the location of points on the total escape probability vs the ratio log-log plane.

  3. Gradually truncated log-normal in USA publicly traded firm size distribution

    NASA Astrophysics Data System (ADS)

    Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.

    2007-03-01

    We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sales are used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of the log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid, with some modification, for very large firms. We also consider the possible mechanisms behind this distribution.

  4. A Bayesian Surrogate for Regional Skew in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-06-01

    The problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered. One approach to this problem is given in Bulletin 17B of the U.S. Water Resources Council (1981) for the log-Pearson distribution. Here a lesser known distribution is considered, namely, the power normal which fits flood data as well as the log-Pearson and has a shape parameter denoted by λ derived from a Box-Cox power transformation. The problem of regionalizing λ is considered from an empirical Bayes perspective where site and regional flood data are used to infer λ. The distortive effects of spatial correlation and heterogeneity of site sampling variance of λ are explicitly studied with spatial correlation being found to be of secondary importance. The end product of this analysis is the posterior distribution of the power normal parameters expressing, in probabilistic terms, what is known about the parameters given site flood data and regional information on λ. This distribution can be used to provide the designer with several types of information. The posterior distribution of the T-year flood is derived. The effect of nonlinearity in λ on inference is illustrated. Because uncertainty in λ is explicitly allowed for, the understatement in confidence limits due to fixing λ (analogous to fixing log skew) is avoided. Finally, it is shown how to obtain the marginal flood distribution which can be used to select a design flood with specified exceedance probability.

  5. Faster search by lackadaisical quantum walk

    NASA Astrophysics Data System (ADS)

    Wong, Thomas G.

    2018-03-01

    In the typical model, a discrete-time coined quantum walk searching the 2D grid for a marked vertex achieves a success probability of O(1/log N) in O(√{N log N}) steps, which with amplitude amplification yields an overall runtime of O(√{N} log N). We show that making the quantum walk lackadaisical or lazy by adding a self-loop of weight 4 / N to each vertex speeds up the search, causing the success probability to reach a constant near 1 in O(√{N log N}) steps, thus yielding an O(√{log N}) improvement over the typical, loopless algorithm. This improved runtime matches the best known quantum algorithms for this search problem. Our results are based on numerical simulations since the algorithm is not an instance of the abstract search algorithm.

  6. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
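
    The claim is straightforward to probe numerically. The following sketch, an illustration under assumed settings rather than the paper's analysis, sums ten log-normal variables and compares the fit of a Gaussian and a log-normal to the resulting sums.

    ```python
    # Sum positive random variables and ask which law fits the sum better.
    # Log-normal summands and n = 10 terms are assumptions for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_terms, n_samples = 10, 200_000
    sums = rng.lognormal(mean=0.0, sigma=1.2, size=(n_samples, n_terms)).sum(axis=1)

    # fit both candidate laws to the sums and compare via Kolmogorov-Smirnov distance
    mu, sd = sums.mean(), sums.std()
    shape, loc, scale = stats.lognorm.fit(sums, floc=0)
    ks_gauss = stats.kstest(sums, "norm", args=(mu, sd)).statistic
    ks_logn = stats.kstest(sums, "lognorm", args=(shape, loc, scale)).statistic

    print(f"KS distance to best Gaussian:   {ks_gauss:.3f}")
    print(f"KS distance to best log-normal: {ks_logn:.3f}")
    ```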

  7. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

    A new log-likelihood ratio (LLR) message estimation method is proposed for a polarization-division multiplexing eight-quadrature-amplitude-modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that the new scheme outperforms the traditional one, reducing the post-decoding BER to 50% of that of the traditional post-decoding algorithm.

  8. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification, which permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data base layers.

  9. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  10. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    NASA Astrophysics Data System (ADS)

    Al-Mudhafar, W. J.

    2013-12-01

    Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution, allowing an accurate reservoir model to be built for optimal future reservoir performance. In this paper, facies estimation has been done through multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix; the observation is then added to the complete data matrix and the algorithm continues with the next observation with missing values. The MLR was chosen to maximize the likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. The MLR predicts the probabilities of the different possible facies given the independent variables by constructing a linear predictor function whose weights are combined with the independent variables through a dot product. A beta distribution of facies was used as prior knowledge, and the resulting predicted probability (posterior) was estimated from the MLR based on Bayes' theorem, which relates the posterior to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap is carried out to estimate extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set; resampling N times produces N bootstrap datasets that are used to re-fit the model, decreasing the squared difference between the estimated and observed categorical variables (facies) and hence the degree of uncertainty.
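
    A hedged sketch of the modeling loop described above, not the author's code: a multinomial logistic regression mapping log responses to facies probabilities, followed by a bootstrap of the misclassification rate. The feature list and the toy data are assumptions; real work would use the cored-well measurements.

    ```python
    # Multinomial logistic regression for facies plus a bootstrap of prediction error.
    # Feature names, class labels, and the random toy data are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 300
    X = np.column_stack([
        rng.normal(60, 15, n),      # gamma ray (API)
        rng.normal(2.45, 0.1, n),   # bulk density (g/cc)
        rng.normal(0.18, 0.05, n),  # log porosity (fraction)
    ])
    facies = rng.integers(0, 3, n)  # three hypothetical facies codes

    # handles the three facies classes multinomially with the default lbfgs solver
    model = LogisticRegression(max_iter=1000).fit(X, facies)
    probs = model.predict_proba(X)  # posterior probability of each facies per sample

    # bootstrap the misclassification rate
    errors = []
    for _ in range(200):
        idx = rng.integers(0, n, n)                     # resample with replacement
        m = LogisticRegression(max_iter=1000).fit(X[idx], facies[idx])
        errors.append(np.mean(m.predict(X) != facies))  # error on the original data
    print(f"bootstrap error: {np.mean(errors):.3f} +/- {np.std(errors):.3f}")
    ```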

  11. [The effect of forest exploitation on structure, diversity, and floristic composition of palmito-dominated Atlantic forests at Misiones, Argentina].

    PubMed

    Chediack, Sandra E

    2008-06-01

    The effect of forest exploitation--timber and palmito (Euterpe edulis, Palmae) extraction--on structure, diversity, and floristic composition of forests known as palmitals of the Atlantic Forest of Argentina was analyzed. These palmitals are located in Misiones (54 degrees 13' W and 25 degrees 41' S). Three 1 ha permanent plots were established: two in the "intangible" zone of the Iguazu National Park (PNI), and another in an exploited forest site bordering the PNI. Three 0.2 ha non-permanent plots were also measured. One was located in the PNI reserve zone where illegal palmito extraction occurs. The other two were in logged forest. All trees and palmitos with DBH >10 cm were identified and DBH and height were measured. For each of the six sites, richness and diversity of tree species, floristic composition, number of endemic species, and density of harvestable tree species were estimated. The harvest of E. edulis increases the density of other tree species, diminishing palmito density. Forest exploitation (logging and palmito harvest) is accompanied by an increase in diversity and density of heliophilic species, which have greater timber value in the region. However, this exploitation also diminishes the density of palmito, of endemic species which normally grow in low densities, and of species found on the IUCN Red List. Results suggest that forest structure may be managed for timber and palmito production. The "intangible" zone of the PNI has the greatest conservation value in the Atlantic Forest, since a greater number of endemisms and endangered species are found here.

  12. Magnetospheric electron density long-term (>1 day) refilling rates inferred from passive radio emissions measured by IMAGE RPI during geomagnetically quiet times

    NASA Astrophysics Data System (ADS)

    Denton, R. E.; Wang, Y.; Webb, P. A.; Tengdin, P. M.; Goldstein, J.; Redfern, J. A.; Reinisch, B. W.

    2012-03-01

    Using measurements of the electron density n_e found from passive radio wave observations by the IMAGE spacecraft RPI instrument on consecutive passes through the magnetosphere, we calculate the long-term (>1 day) refilling rate of equatorial electron density dn_e,eq/dt from L = 2 to 9. Our events did not exhibit saturation, probably because our data set did not include a deep solar minimum and because saturation is an unusual occurrence, especially outside of solar minimum. The median rate in cm^-3/day can be modeled with log10(dn_e,eq/dt) = 2.22 - 0.006 L - 0.0347 L^2, while the third quartile rate can be modeled with log10(dn_e,eq/dt) = 3.39 - 0.353 L, and the mean rate can be modeled as log10(dn_e,eq/dt) = 2.74 - 0.269 L. These statistical values are found from the ensemble of all observed rates at each L value, including negative rates (decreases in density due to azimuthal structure or radial motion or for other reasons), in order to characterize the typical behavior. The first quartile rates are usually negative for L < 4.7 and close to zero for larger L values. Our rates are roughly consistent with previous observations of ion refilling at geostationary orbit. Most previous studies of refilling found larger refilling rates, but many of these examined a single event which may have exhibited unusually rapid refilling. Comparing refilling rates at solar maximum to those at solar minimum, we found that the refilling rate is larger at solar maximum for small L < 4, about the same at solar maximum and solar minimum for L = 4.2 to 5.8, and is larger at solar minimum for large L > 5.8 such as at geostationary orbit (L ≈ 6.8) (at least to L of about 8). These results agree with previous results for ion refilling at geostationary orbit, may agree with previous results at lower L, and are consistent with some trends for ionospheric density.
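
    The three fitted models quoted above can be evaluated directly; the short script below transcribes them and prints the implied rates at a few arbitrary L values.

    ```python
    # Direct transcription of the fitted refilling-rate models quoted above,
    # evaluated at a few L values chosen only for illustration.
    import numpy as np

    L = np.array([2.0, 4.0, 6.8])  # 6.8 ~ geostationary orbit
    median_rate = 10**(2.22 - 0.006 * L - 0.0347 * L**2)   # cm^-3 / day
    third_quartile = 10**(3.39 - 0.353 * L)
    mean_rate = 10**(2.74 - 0.269 * L)

    for l, med, q3, mn in zip(L, median_rate, third_quartile, mean_rate):
        print(f"L = {l:.1f}: median {med:7.1f}, 3rd quartile {q3:7.1f}, mean {mn:7.1f} cm^-3/day")
    ```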

  13. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
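
    A minimal sketch of the Gaussianization step, under assumed inputs rather than the paper's pipeline: find the maximum-likelihood Box-Cox parameter for a skewed, unimodal sample (a chi-square draw standing in for a posterior sample) and check normality afterwards.

    ```python
    # Box-Cox Gaussianization of a skewed, positive sample, with an a posteriori
    # normality check. The chi-square "posterior" sample is an assumption.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    posterior_sample = rng.chisquare(df=4, size=5000)       # skewed, positive stand-in

    transformed, lam = stats.boxcox(posterior_sample)       # maximum-likelihood lambda
    print(f"optimal Box-Cox lambda: {lam:.3f}")
    print(f"skewness before: {stats.skew(posterior_sample):.3f}, after: {stats.skew(transformed):.3f}")

    # quality check, as suggested in the text: a statistical test of Gaussianity
    stat, p = stats.normaltest(transformed)
    print(f"D'Agostino normality test p-value after transform: {p:.3f}")
    ```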

  14. 40 CFR 146.22 - Construction requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... caliper logs before casing is installed; and (B) A cement bond, temperature, or density log after the...; and (C) A cement bond, temperature, or density log after the casing is set and cemented. (g) At a... drinking water. The casing and cement used in the construction of each newly drilled well shall be designed...

  15. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g., log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
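
    A hedged sketch of the simulation idea described above: draw subject-level parameters for a one-compartment oral-absorption model, generate noisy concentration-time profiles, and summarize the shape of log(AUC) and log(Cmax). All parameter values and the error model are assumptions, not the authors' settings.

    ```python
    # Simulate a two-stage (population + measurement error) one-compartment model and
    # inspect the distributions of log(AUC) and log(Cmax). All settings are assumed.
    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    rng = np.random.default_rng(11)
    t = np.linspace(0.25, 24, 20)                   # sampling times (h)
    n_subj, dose = 2000, 100.0

    # subject-level PK parameters (log-normal between-subject variability)
    ka = rng.lognormal(np.log(1.0), 0.3, n_subj)    # absorption rate (1/h)
    ke = rng.lognormal(np.log(0.2), 0.3, n_subj)    # elimination rate (1/h)
    V = rng.lognormal(np.log(30.0), 0.2, n_subj)    # volume of distribution (L)

    log_auc, log_cmax = [], []
    for i in range(n_subj):
        conc = dose * ka[i] / (V[i] * (ka[i] - ke[i])) * (np.exp(-ke[i] * t) - np.exp(-ka[i] * t))
        conc *= rng.lognormal(0.0, 0.1, t.size)      # proportional measurement error
        log_auc.append(np.log(trapezoid(conc, t)))   # AUC by the trapezoidal rule
        log_cmax.append(np.log(conc.max()))

    print("log(AUC):  skew = %.2f, excess kurtosis = %.2f" % (stats.skew(log_auc), stats.kurtosis(log_auc)))
    print("log(Cmax): skew = %.2f, excess kurtosis = %.2f" % (stats.skew(log_cmax), stats.kurtosis(log_cmax)))
    ```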

  16. Stylized facts in internal rates of return on stock index and its derivative transactions

    NASA Astrophysics Data System (ADS)

    Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya

    2007-08-01

    Universal features in stock markets and their derivative markets are studied by means of probability distributions of internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single-asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. The resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S & P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market, respectively, reveals an interesting difference in the behavior of the two probability distributions when a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum resulting from noise-trader strategies, as opposed to the smooth distributions observed for fundamentalist strategies in single-encounter transactions, may be useful in deducing the type of investment strategy from the trading revenues of small portfolio investors.
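
    The internal rate of return used above is the continuously compounded return of a single buy-sell pair, r = ln(P_sell / P_buy) / Δt. The worked example below uses made-up prices and holding periods.

    ```python
    # Internal rate of return of one buy-sell pair as continuous compound interest.
    # Prices and holding periods are hypothetical.
    import math

    def internal_rate_of_return(p_buy, p_sell, holding_days):
        """Continuously compounded annualised return of one buy-sell transaction pair."""
        years = holding_days / 365.0
        return math.log(p_sell / p_buy) / years

    print(f"{internal_rate_of_return(100.0, 103.0, 30):+.3f} per year")  # one-month gain
    print(f"{internal_rate_of_return(100.0, 97.0, 5):+.3f} per year")    # one-week loss
    ```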

  17. A computer program for borehole compensation of dual-detector density well logs

    USGS Publications Warehouse

    Scott, James Henry

    1978-01-01

    The computer program described in this report was developed for applying a borehole-rugosity and mudcake compensation algorithm to dual-density logs using the following information: the water level in the drill hole, hole diameter (from a caliper log if available, or the nominal drill diameter if not), and the two gamma-ray count rate logs from the near and far detectors of the density probe. The equations that represent the compensation algorithm and the calibration of the two detectors (for converting count rate to density) were derived specifically for a probe manufactured by Comprobe Inc. (5.4 cm O.D. dual-density-caliper); they are not applicable to other probes. However, equivalent calibration and compensation equations can be empirically determined for any other similar two-detector density probes and substituted in the computer program listed in this report. * Use of brand names in this report does not necessarily constitute endorsement by the U.S. Geological Survey.

  18. WE-AB-202-01: Evaluating the Toxicity Reduction with CT-Ventilation Functional Avoidance Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradskiy, Y; Miyasaka, Y; Kadoya, N

    Purpose: CT-ventilation is an exciting new imaging modality that uses 4DCTs to calculate lung ventilation. Studies have proposed to use 4DCT-ventilation imaging for functional avoidance radiotherapy, which implies designing treatment plans to spare functional portions of the lung. Although retrospective studies have been performed to evaluate the dosimetric gains to functional lung, no work has been done to translate the dosimetric gains to an improvement in pulmonary toxicity. The purpose of our work was to evaluate the potential reduction in toxicity for 4DCT-ventilation based functional avoidance. Methods: 70 lung cancer patients with 4DCT imaging were used for the study. CT-ventilation maps were calculated using the patient's 4DCT, deformable image registrations, and a density-change-based algorithm. Radiation pneumonitis was graded using imaging and clinical information. Log-likelihood methods were used to fit a normal-tissue-complication-probability (NTCP) model predicting grade 2+ radiation pneumonitis as a function of doses (mean and V20) to functional lung (>15% ventilation). For 20 patients a functional plan was generated that reduced dose to functional lung while meeting RTOG 0617-based constraints. The NTCP model was applied to the functional plan to determine the reduction in toxicity with functional planning. Results: The mean dose to functional lung was 16.8 and 17.7 Gy with the functional and clinical plans respectively. The corresponding grade 2+ pneumonitis probability was 26.9% with the clinically-used plan and 24.6% with the functional plan (8.5% reduction). The V20-based grade 2+ pneumonitis probability was 23.7% with the clinically-used plan and reduced to 19.6% with the functional plan (20.9% reduction). Conclusion: Our results revealed a reduction of 9-20% in complication probability with functional planning. To our knowledge this is the first study to apply complication probability to convert dosimetric results to toxicity improvement. The results presented in the current work provide seminal data for prospective clinical trials in functional avoidance. YV discloses funding from State of Colorado. TY discloses National Lung Cancer Partnership; Young Investigator Research grant.

  19. Modeling the size-density relationship in direct-seeded slash pine stands

    Treesearch

    Quang V. Cao; Thomas J. Dean; V. Clark Baldwin

    2000-01-01

    The relationship between quadratic mean diameter and tree density appeared curvilinear on a log–log scale, based on data from direct-seeded slash pine (Pinus elliotti var. elliotti Engelm.) stands. The self-thinning trajectory followed a straight line for high tree density levels and then turned away from this line as tree density...

  20. Log evaluation in wells drilled with inverted oil emulsion mud. [GLOBAL program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.P.; Lacour-Gayet, P.J.; Suau, J.

    1981-01-01

    As greater use is made of inverted oil emulsion muds in the development of North Sea oil fields, the need for more precise log evaluation in this environment becomes apparent. This paper demonstrates an approach using the Dual Induction Log, taking into account invasion and boundary effects. Lithology and porosity are derived from the Formation Density or Litho-Density Log, Compensated Neutron Log, Sonic Log and the Natural Gamma Ray Spectrometry log. The effect of invasion by the oil component of the mud filtrate is treated in the evaluation, and a measurement of Moved Water is made. Computations of petrophysical properties are implemented by means of the GLOBAL interpretation program, taking advantage of its capability of adaptation to any combination of logging sensors. 8 refs.

  1. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    NASA Astrophysics Data System (ADS)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size as well as the 90/95 percent crack length vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimators (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.
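
    A hedged sketch of the preferred approach above, a log-normal POD model (probit in log crack length) fitted to hit/miss data by maximum likelihood; the synthetic crack sizes and outcomes are assumptions, and confidence bounds are omitted.

    ```python
    # Maximum-likelihood fit of a log-normal POD curve to hit/miss data.
    # The synthetic crack sizes and true parameters are illustrative assumptions.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(5)
    a = rng.uniform(0.5, 6.0, 200)                    # crack lengths (mm)
    true_mu, true_sigma = np.log(2.0), 0.4
    hits = rng.random(200) < stats.norm.cdf((np.log(a) - true_mu) / true_sigma)

    def neg_log_lik(theta):
        mu, log_sigma = theta
        p = stats.norm.cdf((np.log(a) - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-12, 1 - 1e-12)              # guard the logarithm
        return -np.sum(hits * np.log(p) + (~hits) * np.log(1 - p))

    res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    a90 = np.exp(mu_hat + sigma_hat * stats.norm.ppf(0.90))  # crack length with 90% POD
    print(f"fitted mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}, a90 = {a90:.2f} mm")
    ```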

  2. Analysis of Electronic Densities and Integrated Doses in Multiform Glioblastomas Stereotactic Radiotherapy

    NASA Astrophysics Data System (ADS)

    Barón-Aznar, C.; Moreno-Jiménez, S.; Celis, M. A.; Lárraga-Gutiérrez, J. M.; Ballesteros-Zebadúa, P.

    2008-08-01

    Integrated dose is the total energy delivered in a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.

  3. GLOBAL RATES OF CONVERGENCE OF THE MLES OF LOG-CONCAVE AND s-CONCAVE DENSITIES

    PubMed Central

    Doss, Charles R.; Wellner, Jon A.

    2017-01-01

    We establish global rates of convergence for the Maximum Likelihood Estimators (MLEs) of log-concave and s-concave densities on ℝ. The main finding is that the rate of convergence of the MLE in the Hellinger metric is no worse than n^(-2/5) when −1 < s < ∞, where s = 0 corresponds to the log-concave case. We also show that the MLE does not exist for the classes of s-concave densities with s < −1. PMID:28966409

  4. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.

  5. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  6. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. The flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes would always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements of minimum required PPD, maximum allowable POF, requirements on flaw size tolerance about the mean flaw size, and flaw size detectability requirements. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
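
    The binomial arithmetic behind the 29-flaw point estimate is worth spelling out: if all 29 flaws must be detected, the probability of passing the demonstration is POD^29, and a pass demonstrates 90% POD at 95% confidence because 0.90^29 < 0.05. The short calculation below tabulates this for a few assumed POD values.

    ```python
    # Probability of passing a 29-of-29 hit demonstration as a function of the true POD.
    # The tabulated POD values are arbitrary illustrations.
    for pod in (0.90, 0.95, 0.98, 0.99):
        ppd = pod ** 29
        print(f"true POD = {pod:.2f} -> probability of passing 29/29 demo = {ppd:.3f}")
    # true POD = 0.90 gives about 0.047, so a full pass rejects POD <= 0.90 at the 95% level
    ```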

  7. A Study of Stranding of Juvenile Salmon by Ship Wakes Along the Lower Columbia River Using a Before-and-After Design: Before-Phase Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearson, Walter H.; Skalski, J R.; Sobocinski, Kathryn L.

    2006-02-01

    Ship wakes produced by deep-draft vessels transiting the lower Columbia River have been observed to cause stranding of juvenile salmon. Proposed deepening of the Columbia River navigation channel has raised concerns about the potential impact of the deepening project on juvenile salmon stranding. The Portland District of the U.S. Army Corps of Engineers requested that the Pacific Northwest National Laboratory design and conduct a study to assess stranding impacts that may be associated with channel deepening. The basic study design was a multivariate analysis of covariance of field observations and measurements under a statistical design for a before-and-after impact comparison. We have summarized field activities and statistical analyses for the "before" component of the study here. Stranding occurred at all three sampling sites and during all three sampling seasons (Summer 2004, Winter 2005, and Spring 2005), for a total of 46 stranding events during 126 observed vessel passages. The highest occurrence of stranding occurred at Barlow Point, WA, where 53% of the observed events resulted in stranding. Other sites included Sauvie Island, OR (37%) and County Line Park, WA (15%). To develop an appropriate impact assessment model that accounted for relevant covariates, regression analyses were conducted to determine the relationships between stranding probability and other factors. Nineteen independent variables were considered as potential factors affecting the incidence of juvenile salmon stranding, including tidal stage, tidal height, river flow, current velocity, ship type, ship direction, ship condition (loaded/unloaded), ship speed, ship size, and a proxy variable for ship kinetic energy. In addition to the ambient and ship characteristics listed above, site, season, and fish density were also considered. Although no single factor appears as the primary factor for stranding, statistical analyses of the covariates resulted in the following equations: (1) Stranding Probability ~ Location + Kinetic Energy Proxy + Tidal Height + Salmonid Density + Kinetic Energy Proxy × Tidal Height + Tidal Height × Salmonid Density. (2) Stranding Probability ~ Location + Total Wave Distance + Salmonid Density Index. (3) Log(Total Wave Height) ~ Ship Block + Tidal Height + Location + Ship Speed. (4) Log(Total Wave Excursion Across the Beach) ~ Location + Kinetic Energy Proxy + Tidal Height. The above equations form the basis for a conceptual model of the factors leading to salmon stranding. The equations also form the basis for an approach for assessing impacts of dredging under the before/after study design.

  8. Modeling pore corrosion in normally open gold- plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
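    The two effects noted above, a site density that rises with time while the bloom size distribution stays roughly time independent, can be mimicked with a small Monte Carlo sketch in which blooms nucleate at a constant, area-proportional rate and stop growing with a probability proportional to their volume. All rate constants below are invented for illustration and are not the report's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rate constants (assumptions, not the report's values).
init_rate = 25.0    # expected new corrosion sites per step (area-proportional)
growth_rate = 0.6   # bloom radius growth per step while active
k_ext = 1e-3        # extinction probability per step per unit bloom volume
n_steps = 200

radii, active = [], []
for step in range(n_steps):
    # New blooms nucleate at a rate proportional to the exposed surface area.
    for _ in range(rng.poisson(init_rate)):
        radii.append(0.0)
        active.append(True)
    for i, r in enumerate(radii):
        if not active[i]:
            continue
        radii[i] = r + growth_rate
        # Growth stops with a probability proportional to the bloom volume.
        volume = (4.0 / 3.0) * np.pi * radii[i] ** 3
        if rng.random() < min(1.0, k_ext * volume):
            active[i] = False

radii = np.array(radii)
print(f"{radii.size} blooms; median radius {np.median(radii):.1f}, "
      f"90th percentile {np.percentile(radii, 90):.1f}")
```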

  9. Woody debris volume depletion through decay: implications for biomass and carbon accounting

    USGS Publications Warehouse

    Fraver, Shawn; Milo, Amy M.; Bradford, John B.; D'Amato, Anthony W.; Kenefic, Laura; Palik, Brian J.; Woodall, Christopher W.; Brissette, John

    2013-01-01

    Woody debris decay rates have recently received much attention because of the need to quantify temporal changes in forest carbon stocks. Published decay rates, available for many species, are commonly used to characterize deadwood biomass and carbon depletion. However, decay rates are often derived from reductions in wood density through time, which when used to model biomass and carbon depletion are known to underestimate the rate of loss because they fail to account for volume reduction (changes in log shape) as decay progresses. We present a method for estimating changes in log volume through time and illustrate the method using a chronosequence approach. The method is based on the observation, confirmed herein, that decaying logs have a collapse ratio (cross-sectional height/width) that can serve as a surrogate for the volume remaining. Combining the resulting volume loss with concurrent changes in wood density from the same logs then allowed us to quantify biomass and carbon depletion for three study species. Results show that volume, density, and biomass follow distinct depletion curves during decomposition. Volume showed an initial lag period (log dimensions remained unchanged), even while wood density was being reduced. However, once volume depletion began, biomass loss (the product of density and volume depletion) occurred much more rapidly than density alone. At the temporal limit of our data, the proportion of the biomass remaining was roughly half that of the density remaining. Accounting for log volume depletion, as demonstrated in this study, provides a comprehensive characterization of deadwood decomposition, thereby improving biomass-loss and carbon-accounting models.

  10. Generalised Extreme Value Distributions Provide a Natural Hypothesis for the Shape of Seed Mass Distributions

    PubMed Central

    2015-01-01

    Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and generally positively skewed. Such data are usually log-transformed “for normality” but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise disturbance events experienced in a location to which individual species’ life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm. PMID:25830773
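    As a rough illustration of the comparison described above, the sketch below fits a GEV and a normal distribution to placeholder log10 seed-mass data with scipy and compares them by AIC. The data and the AIC criterion are assumptions for illustration, not the study's datasets or its scoring method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: log10 seed masses for one site (replace with real values).
log_mass = rng.gumbel(loc=0.0, scale=0.8, size=300)

# Fit a generalised extreme value distribution and, for comparison, a normal.
gev_params = stats.genextreme.fit(log_mass)
norm_params = stats.norm.fit(log_mass)

# Compare the two candidate shapes with a simple AIC (2k - 2 logL).
def aic(dist, params, x, k):
    return 2 * k - 2 * np.sum(dist.logpdf(x, *params))

print("GEV    AIC:", aic(stats.genextreme, gev_params, log_mass, 3))
print("Normal AIC:", aic(stats.norm, norm_params, log_mass, 2))
```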

  11. Interactions of spatial and luminance information in the retina of chickens during myopia development.

    PubMed

    Feldkaemper, M; Diether, S; Kleine, G; Schaeffel, F

    1999-01-01

    Degrading the retinal image by frosted eye occluders produces elongated eyes and 'deprivation myopia' in a variety of animal models. The postulated retinal 'deprivation detector' is quite sensitive to even small changes in image contrast or spatial frequency composition. Because psychophysical experiments have shown that a decline in luminance shifts the contrast sensitivity function to lower spatial frequencies, it is likely that only a reduced spatial frequency range is available for image analysis to control eye growth. It is even possible that the compression might be sufficient to promote deprivation myopia. We have tested this hypothesis, using the animal model of the chicken. (1) At an ambient illumination of 550 lux (about 76 cd m-2), neutral density (ND) filters placed in front of the eye with 0.0, 0.5 or 1.0 log unit attenuation did not change refractive development. However, monocularly or binocularly attached filters with 2 log units attenuation produced 5-7 D of myopia relative to normal eyes. Black occluders were not more effective. Frosted eye occluders with little effect on image brightness (about 0.5 log units attenuation) produced much more myopia (about 16 D compared with the controls). (2) The effects of the ND filters on refractive development could not be reproduced if the ambient illumination was reduced by 2 log units. Probably, minor effects on image quality were introduced by optical imperfections of the ND filters which were more critical at low retinal image brightness. (3) In an optomotor experiment (spatial frequency 0.2 cyc deg-1, stripe speed 57 deg sec-1), it was shown that the chickens' contrast sensitivity was severely reduced when the eyes were covered by 2.0 ND filters. (4) Since there is evidence that changes in dopamine release from the retina may be one of the factors affecting the development of myopia, we have tested how selective these changes were for spatial information. It was found that dopamine release was controlled by both spatial and luminance information and that the inputs of both could be scarcely separated. (5) Because the experiments show that the eye becomes more sensitive to image degradation at low light, the human eye may also be more prone to develop myopia if the light levels are low during extended periods of near work. Copyright 1999 Academic Press.

  12. Predicting Grade 3 Acute Diarrhea During Radiation Therapy for Rectal Cancer Using a Cutoff-Dose Logistic Regression Normal Tissue Complication Probability Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, John M., E-mail: jrobertson@beaumont.ed; Soehn, Matthias; Yan Di

    Purpose: Understanding the dose-volume relationship of small bowel irradiation and severe acute diarrhea may help reduce the incidence of this side effect during adjuvant treatment for rectal cancer. Methods and Materials: Consecutive patients treated curatively for rectal cancer were reviewed, and the maximum grade of acute diarrhea was determined. The small bowel was outlined on the treatment planning CT scan, and a dose-volume histogram was calculated for the initial pelvic treatment (45 Gy). Logistic regression models were fitted for varying cutoff-dose levels from 5 to 45 Gy in 5-Gy increments. The model with the highest log-likelihood was used to develop a cutoff-dose normal tissue complication probability (NTCP) model. Results: There were a total of 152 patients (48% preoperative, 47% postoperative, 5% other), predominantly treated prone (95%) with a three-field technique (94%) and a protracted venous infusion of 5-fluorouracil (78%). Acute Grade 3 diarrhea occurred in 21%. The largest log-likelihood was found for the cutoff-dose logistic regression model with 15 Gy as the cutoff-dose, although the models for 20 Gy and 25 Gy had similar significance. According to this model, highly significant correlations (p <0.001) between small bowel volumes receiving at least 15 Gy and toxicity exist in the considered patient population. Similar findings applied to both the preoperatively (p = 0.001) and postoperatively irradiated groups (p = 0.001). Conclusion: The incidence of Grade 3 diarrhea was significantly correlated with the volume of small bowel receiving at least 15 Gy using a cutoff-dose NTCP model.
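    A minimal sketch of the cutoff-dose scan is given below, using synthetic placeholder dose-volume data and an ordinary logistic regression fit at each candidate cutoff, with the cutoff chosen by log-likelihood. Variable names and data are illustrative assumptions, not the study's records.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inputs: dvh[i, j] = small-bowel volume (cc) of patient i
# receiving at least dose_levels[j]; tox[i] = 1 if Grade 3 diarrhea occurred.
dose_levels = np.arange(5, 50, 5)            # 5 ... 45 Gy cutoffs
rng = np.random.default_rng(1)
dvh = np.sort(rng.uniform(0, 400, size=(152, len(dose_levels))))[:, ::-1]
tox = rng.binomial(1, 0.21, size=152)

best = None
for j, cutoff in enumerate(dose_levels):
    X = sm.add_constant(dvh[:, j])
    fit = sm.Logit(tox, X).fit(disp=False)
    if best is None or fit.llf > best[1]:
        best = (cutoff, fit.llf, fit)

cutoff, llf, fit = best
print(f"best cutoff dose: {cutoff} Gy (log-likelihood {llf:.1f})")
# NTCP curve for the selected cutoff: probability of toxicity vs. volume.
vols = np.linspace(0, 400, 5)
print(fit.predict(sm.add_constant(vols)))
```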

  13. Discovery of a transparent sightline at ρ ≲ 20 kpc from an interacting pair of galaxies

    NASA Astrophysics Data System (ADS)

    Johnson, Sean D.; Chen, Hsiao-Wen; Mulchaey, John S.; Tripp, Todd M.; Prochaska, J. Xavier; Werk, Jessica K.

    2014-03-01

    We report the discovery of a transparent sightline at projected distances of ρ ≲ 20 kpc to an interacting pair of mature galaxies at z = 0.12. The sightline of the UV-bright quasar PG 1522+101 at z_em = 1.328 passes at ρ = 11.5 kpc from the higher mass galaxy (M* = 10^10.6 M⊙) and ρ = 20.4 kpc from the lower mass one (M* = 10^10.0 M⊙). The two galaxies are separated by 9 kpc in projected distance and 30 km s-1 in line-of-sight velocity. Deep optical images reveal tidal features indicative of close interactions. Despite the small projected distances, the quasar sightline shows little absorption associated with the galaxy pair with a total H I column density no greater than log N(H I)/cm^-2 = 13.65. This limiting H I column density is already two orders of magnitude less than what is expected from previous halo gas studies. In addition, we detect no heavy-element absorption features associated with the galaxy pair with 3σ limits of log N(Mg II)/cm^-2 < 12.2 and log N(O VI)/cm^-2 < 13.7. The probability of seeing such little absorption in a sightline passing at a small projected distance from two non-interacting galaxies is 0.2 per cent. The absence of strong absorbers near the close galaxy pair suggests that the cool gas reservoirs of the galaxies have been significantly depleted by the galaxy interaction. These observations therefore underscore the potential impact of galaxy interactions on the gaseous haloes around galaxies.

  14. Factors Limiting Post-logging Seedling Regeneration by Big-leaf Mahogany (Swietenia macrophylla) in Southeastern Amazonia, Brazil, and Implications for Sustainable Management

    Treesearch

    James Grogan; Jurandir Galvao

    2006-01-01

    Post-logging seedling regeneration density by big-leaf mahogany (Swietenia macrophylla), a nonpioneer light-demanding timber species, is generally reported to be low to nonexistent. To investigate factors limiting seedling density following logging within the study region, we quantified seed production rates, germinability, dispersal patterns, and seed fates on the...

  15. Image-based 3D modeling study of the influence of vessel density and blood hemoglobin concentration on tumor oxygenation and response to irradiation.

    PubMed

    Lagerlöf, Jakob H; Kindblom, Jon; Cortez, Eliane; Pietras, Kristian; Bernhardt, Peter

    2013-02-01

    Hypoxia is one of the most important factors influencing clinical outcome after radiotherapy. Improved knowledge of factors affecting the levels and distribution of oxygen within a tumor is needed. The authors constructed a theoretical 3D model based on histological images to analyze the influence of vessel density and hemoglobin (Hb) concentration on the response to irradiation. The pancreases of a Rip-Tag2 mouse, a model of malignant insulinoma, were excised, cryosectioned, immunostained, and photographed. Vessels were identified by image thresholding and a 3D vessel matrix assembled. The matrix was reduced to functional vessel segments and enlarged by replication. The steady-state oxygen tension field of the tumor was calculated by iteratively employing Green's function method for diffusion and the Michaelis-Menten model for consumption. The impact of vessel density on the radiation response was studied by removing a number of randomly selected vessels. The impact of Hb concentration was studied by independently changing vessel oxygen partial pressure (pO(2)). For each oxygen distribution, the oxygen enhancement ratio (OER) was calculated and the mean absorbed dose at which the tumor control probability (TCP) was 0.99 (D(99)) was determined using the linear-quadratic cell survival model (LQ model). Decreased pO(2) shifted the oxygen distribution to lower values, whereas decreased vessel density caused the distribution to widen and shift to lower values. Combined scenarios caused lower-shifted distributions, emphasising log-normal characteristics. Vessel reduction combined with increased blood pO(2) caused the distribution to widen due to a lack of vessels. The most pronounced radiation effect of increased pO(2) occurred with tumor tissue with 50% of the maximum vessel density used in the simulations. A 51% decrease in D(99), from 123 to 60 Gy, was found between the lowest and highest pO(2) concentrations. Our results indicate that an intermediate vascular density region exists where enhanced blood oxygen concentration may be beneficial for radiation response. The results also suggest that it is possible to distinguish between diffusion-limited and anemic hypoxia from the characteristics of the pO(2) distribution.
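    The dose-response step can be illustrated with a small LQ/TCP sketch. The LQ coefficients, clonogen number, and the Alper-type oxygen modification factor below are assumed illustrative values, not the parameters used in the paper, and a single uniform dose stands in for the full 3D dose calculation.

```python
import numpy as np

# Illustrative parameters only (assumptions, not the paper's values).
alpha, beta = 0.35, 0.035        # LQ coefficients for oxic cells (Gy^-1, Gy^-2)
n_clonogens = 1e7
m, K = 3.0, 3.0                  # maximum OER and half-effect pO2 (mmHg)

def omf(po2):
    """Oxygen modification factor: 1 when fully oxic, 1/m when anoxic."""
    return (m * po2 + K) / (m * (po2 + K))

def tcp(dose, po2_samples):
    """Poisson TCP for a uniform physical dose over a sampled pO2 field."""
    d_eff = dose * omf(po2_samples)
    surviving_fraction = np.exp(-(alpha * d_eff + beta * d_eff ** 2))
    return np.exp(-n_clonogens * np.mean(surviving_fraction))

def d99(po2_samples):
    """Smallest uniform dose giving TCP >= 0.99 (coarse grid search)."""
    for dose in np.arange(1.0, 300.0, 0.5):
        if tcp(dose, po2_samples) >= 0.99:
            return dose
    return np.nan

# Example: a hypoxic (low, wide) vs. a well-oxygenated pO2 distribution.
rng = np.random.default_rng(0)
print("hypoxic D99 ~", d99(rng.lognormal(0.5, 1.0, 10_000)), "Gy")
print("oxic    D99 ~", d99(rng.lognormal(3.0, 0.5, 10_000)), "Gy")
```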

  16. Statistical characterization of a large geochemical database and effect of sample size

    USGS Publications Warehouse

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
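    The sample-size effect described above can be reproduced qualitatively with a short sketch. The mixture parameters below are invented placeholders standing in for the declustered NGS data, and the D'Agostino test stands in for whichever normality tests the authors applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder concentrations (ppm): a dominant population plus a small
# enriched subpopulation, mimicking the mixtures described above.
conc = np.concatenate([
    rng.lognormal(mean=3.0, sigma=1.2, size=15000),
    rng.lognormal(mean=6.0, sigma=0.5, size=1511),
])

# Normality tests on the raw and log10-transformed data: with ~16,500 samples
# even modest departures from (log-)normality are flagged as significant.
for label, x in (("raw", conc), ("log10", np.log10(conc))):
    print(f"{label:>5}: D'Agostino p = {stats.normaltest(x).pvalue:.2e}")

# Repeating the test on random subsamples shows how sample size drives the
# rejection of log-normality.
log_conc = np.log10(conc)
for n in (10000, 1000, 300):
    sub = rng.choice(log_conc, size=n, replace=False)
    print(f"n = {n:>5}: p = {stats.normaltest(sub).pvalue:.3f}")
```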

  17. Estimating the carbon in coarse woody debris with perpendicular distance sampling. Chapter 6

    Treesearch

    Harry T. Valentine; Jeffrey H. Gove; Mark J. Ducey; Timothy G. Gregoire; Michael S. Williams

    2008-01-01

    Perpendicular distance sampling (PDS) is a design for sampling the population of pieces of coarse woody debris (logs) in a forested tract. In application, logs are selected at sample points with probability proportional to volume. Consequently, aggregate log volume per unit land area can be estimated from tallies of logs at sample points. In this chapter we provide...

  18. Using satellite remote sensing to model and map the distribution of Bicknell's thrush (Catharus bicknelli) in the White Mountains of New Hampshire

    NASA Astrophysics Data System (ADS)

    Hale, Stephen Roy

    Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to nearest Fir Sapling cover type. To solve the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. Topographically normalized imagery features Normalized Difference Vegetation Index and Band 1 (blue) were used to estimate dominant vegetation height using multiple linear regression; and a Digital Elevation Model was used to estimate elevation. Distance to nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation using Bicknell's Thrush data not used in model calibration revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60. Probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, which is accompanied by a concomitant decrease in land area. The greater land area of lower-density habitat may account for more total individuals and reproductive output than the less abundant area of higher-density habitat. Efforts to conserve areas of highest individual density under the assumption that density reflects habitat quality could target the smallest fraction of the total population.

  19. Compaction of forest soil by logging machinery favours occurrence of prokaryotes.

    PubMed

    Schnurr-Pütz, Silvia; Bååth, Erland; Guggenberger, Georg; Drake, Harold L; Küsel, Kirsten

    2006-12-01

    Soil compaction caused by passage of logging machinery reduces the soil air capacity. Changed abiotic factors might induce a change in the soil microbial community and favour organisms capable of tolerating anoxic conditions. The goals of this study were to resolve differences between soil microbial communities obtained from wheel-tracks (i.e. compacted) and their adjacent undisturbed sites, and to evaluate differences in potential anaerobic microbial activities of these contrasting soils. Soil samples obtained from compacted soil had a greater bulk density and a higher pH than uncompacted soil. Analyses of phospholipid fatty acids demonstrated that the eukaryotic/prokaryotic ratio in compacted soils was lower than that of uncompacted soils, suggesting that fungi were not favoured by the in situ conditions produced by compaction. Indeed, most-probable-number (MPN) estimates of nitrous oxide-producing denitrifiers, acetate- and lactate-utilizing iron and sulfate reducers, and methanogens were higher in compacted than in uncompacted soils obtained from one site that had large differences in bulk density. Compacted soils from this site yielded higher iron-reducing, sulfate-reducing and methanogenic potentials than did uncompacted soils. MPN estimates of H2-utilizing acetogens in compacted and uncompacted soils were similar. These results indicate that compaction of forest soil alters the structure and function of the soil microbial community and favours occurrence of prokaryotes.

  20. Fluid overpressure estimates from the aspect ratios of mineral veins

    NASA Astrophysics Data System (ADS)

    Philipp, Sonja L.

    2012-12-01

    Several hundred calcite veins and (mostly) normal faults were studied in limestone and shale layers of a Mesozoic sedimentary basin next to the village of Kilve at the Bristol Channel (SW-England). The veins strike mostly E-W (239 measurements), that is, parallel with the associated normal faults. The mean vein dip is 73°N (44 measurements). Field observations indicate that these faults transported the fluids up into the limestone layers. The vein outcrop (trace) length (0.025-10.3 m) and thickness (0.1-28 mm) size distributions are log-normal. Taking the thickness as the dependent variable and the outcrop length as the independent variable, linear regression gives a coefficient of determination (goodness of fit) of R2 = 0.74 (significant with 99% confidence), but natural logarithmic transformation of the thickness-length data increases the coefficient of determination to R2 = 0.98, indicating that nearly all the variation in thickness can be explained in terms of variation in trace length. The geometric mean of the aspect (length/thickness) ratio, 451, gives the best representation of the data set. With 95% confidence, the true geometric mean of the aspect ratios of the veins lies in the interval 409-497. Using elastic crack theory, appropriate elastic properties of the host rock, and the mean aspect ratio, the fluid overpressure (that is, the total fluid pressure minus the normal stress on the fracture plane) at the time of vein formation is estimated at around 18 MPa. From these results, and using the average host rock and water densities, the depth to the sources of the fluids (below the present exposures) forming the veins is estimated at between around 300 m and 1200 m. These results are in agreement with those obtained by independent isotopic studies and indicate that the fluids were of rather local origin, probably injected from sill-like sources (water sills) inside the sedimentary basin.
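    A back-of-the-envelope version of the overpressure estimate can be written down from a standard plane-strain through-crack relation. The host-rock stiffness, Poisson's ratio, and densities below are assumed illustrative values, not necessarily those used in the study, and the crack model is one common choice among several.

```python
# Illustrative values; stiffness and densities are assumptions.
E = 15e9        # Young's modulus of the limestone host rock (Pa)
nu = 0.25       # Poisson's ratio
aspect = 451    # geometric mean length/thickness ratio of the veins

# Plane-strain through-crack model: b_max = 2 (1 - nu^2) L dP / E,
# so the fluid overpressure is dP = E / (2 (1 - nu^2) * (L / b_max)).
dP = E / (2 * (1 - nu ** 2) * aspect)
print(f"fluid overpressure ~ {dP / 1e6:.1f} MPa")

# Depth to an overpressured source, if the overpressure reflects the density
# contrast between rock and water integrated over that depth interval.
rho_rock, rho_water, g = 2500.0, 1000.0, 9.81
depth = dP / ((rho_rock - rho_water) * g)
print(f"implied source depth ~ {depth:.0f} m")
```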

  1. Rapid changes in ice core gas records - Part 1: On the accuracy of methane synchronisation of ice cores

    NASA Astrophysics Data System (ADS)

    Köhler, P.

    2010-08-01

    Methane synchronisation is a concept to align ice core records during rapid climate changes of the Dansgaard/Oeschger (D/O) events onto a common age scale. However, atmospheric gases are recorded in ice cores with a log-normal-shaped age distribution probability density function, whose exact shape depends mainly on the accumulation rate at the drilling site. This age distribution effectively shifts the mid-transition points of rapid changes in CH4 measured in situ in ice by about 58% of the width of the age distribution with respect to the atmospheric signal. A minimum dating uncertainty, or artefact, in the CH4 synchronisation is therefore embedded in the concept itself, which was not accounted for in previous error estimates. For GRIP and Byrd, this synchronisation artefact between Greenland and Antarctic ice cores is less than 40 years, well within the dating uncertainty of CH4, and therefore does not call the overall concept of the bipolar seesaw into question. However, if the EPICA Dome C ice core is aligned via CH4 to NGRIP, this synchronisation artefact is, in the most recent unified ice core age scale (Lemieux-Dudon et al., 2010), of the order of three centuries for LGM climate conditions and might need consideration in future gas chronologies.
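    The origin of the synchronisation artefact can be reproduced with a small numerical sketch: a step change in atmospheric CH4 convolved with a log-normal age distribution. The distribution parameters and CH4 levels below are illustrative and do not correspond to any particular core.

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-normal gas-age distribution; parameters are illustrative, since the real
# shape depends on the accumulation rate at the drilling site.
ages = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=100_000)   # years
ages_sorted = np.sort(ages)

# Atmospheric CH4: an instantaneous step from 450 to 650 ppb at time t = 0.
# The value recorded in the ice at time t is the atmospheric history averaged
# over the age distribution, i.e. a step convolved with the age pdf.
t_ice = np.linspace(-300.0, 300.0, 1201)
frac_after_step = np.searchsorted(ages_sorted, t_ice) / ages.size
ch4_ice = 450.0 + 200.0 * frac_after_step

# The mid-transition point of the recorded signal lags the true atmospheric
# transition (t = 0) by roughly the median of the age distribution.
t_mid = t_ice[np.argmin(np.abs(ch4_ice - 550.0))]
width = np.quantile(ages, 0.95) - np.quantile(ages, 0.05)
print(f"recorded mid-transition lags the atmosphere by ~{t_mid:.0f} yr")
print(f"5-95% width of the age distribution: ~{width:.0f} yr")
```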

  2. Effects of local extinction on mixture fraction and scalar dissipation statistics in turbulent nonpremixed flames

    NASA Astrophysics Data System (ADS)

    Attili, Antonio; Bisetti, Fabrizio

    2015-11-01

    Passive scalar and scalar dissipation statistics are investigated in a set of flames achieving a Taylor's scale Reynolds number in the range 100 ≤ Reλ ≤ 150 [Attili et al. Comb. Flame 161, 2014; Attili et al. Proc. Comb. Inst. 35, 2015]. The three flames simulated show an increasing level of extinction due to the decrease of the Damköhler number. In the case of negligible extinction, the non-dimensional scalar dissipation is expected to be the same in the three cases. In the present case, the deviation from the aforementioned self-similarity manifests itself as a decrease of the non-dimensional scalar dissipation for increasing levels of local extinction, in agreement with recent experiments [Karpetis and Barlow Proc. Comb. Inst. 30, 2005; Sutton and Driscoll Combust. Flame 160, 2013]. This is caused by the decrease of molecular diffusion due to the lower temperature in the low Damköhler number cases. Probability density functions of the scalar dissipation χ show rather strong deviations from the log-normal distribution. The left tail of the pdf scales as χ^(1/2) while the right tail scales as exp(-c χ^α), in agreement with results for incompressible turbulence [Schumacher et al. J. Fluid Mech. 531, 2005].

  3. Bridging stylized facts in finance and data non-stationarities

    NASA Astrophysics Data System (ADS)

    Camargo, Sabrina; Duarte Queirós, Sílvio M.; Anteneodo, Celia

    2013-04-01

    Employing a recent technique which allows the representation of nonstationary data by means of a juxtaposition of locally stationary paths of different length, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to introduce a quantitative description of statistical features of these two quantities, which are often named stylized facts, namely the tails of the distribution of trading volume and price fluctuations and a dynamics compatible with the U-shaped profile of the volume in a trading session and the slow decay of the autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to the mixing scenario. Assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions to provide the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation results for the magnitude of price fluctuations are quite different from those for the trading volume, indicating that changes in the statistics of price fluctuations occur at a faster scale than in the case of trading volume.

  4. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the important geotechnical parameter compression modulus Es contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the mechanical effects of Es on differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method integrates, rigorously and efficiently, multiple precisions of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT samplings modeled with a normal distribution and with the simulated probability density curve based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  5. Selective logging: does the imprint remain on tree structure and composition after 45 years?

    PubMed

    Osazuwa-Peters, Oyomoare L; Chapman, Colin A; Zanne, Amy E

    2015-01-01

    Selective logging of tropical forests is increasing in extent and intensity. The duration over which impacts of selective logging persist, however, remains an unresolved question, particularly for African forests. Here, we investigate the extent to which a past selective logging event continues to leave its imprint on different components of an East African forest 45 years later. We inventoried 2358 stems ≥10 cm in diameter in 26 plots (200 m × 10 m) within a 5.2 ha area in Kibale National Park, Uganda, in logged and unlogged forest. In these surveys, we characterized the forest light environment, taxonomic composition, functional trait composition using three traits (wood density, maximum height and maximum diameter) and forest structure based on three measures (stem density, total basal area and total above-ground biomass). In comparison to unlogged forests, selectively logged forest plots in Kibale National Park on average had higher light levels, different structure characterized by lower stem density, lower total basal area and lower above-ground biomass, and a distinct taxonomic composition driven primarily by changes in the relative abundance of species. Conversely, selectively logged forest plots were like unlogged plots in functional composition, having similar community-weighted mean values for wood density, maximum height and maximum diameter. This similarity in functional composition irrespective of logging history may be due to functional recovery of logged forest or background changes in functional attributes of unlogged forest. Despite the passage of 45 years, the legacy of selective logging on the tree community in Kibale National Park is still evident, as indicated by distinct taxonomic and structural composition and reduced carbon storage in logged forest compared with unlogged forest. The effects of selective logging are exerted via influences on tree demography rather than functional trait composition.

  6. Selective logging: does the imprint remain on tree structure and composition after 45 years?

    PubMed Central

    Osazuwa-Peters, Oyomoare L.; Chapman, Colin A.; Zanne, Amy E.

    2015-01-01

    Selective logging of tropical forests is increasing in extent and intensity. The duration over which impacts of selective logging persist, however, remains an unresolved question, particularly for African forests. Here, we investigate the extent to which a past selective logging event continues to leave its imprint on different components of an East African forest 45 years later. We inventoried 2358 stems ≥10 cm in diameter in 26 plots (200 m × 10 m) within a 5.2 ha area in Kibale National Park, Uganda, in logged and unlogged forest. In these surveys, we characterized the forest light environment, taxonomic composition, functional trait composition using three traits (wood density, maximum height and maximum diameter) and forest structure based on three measures (stem density, total basal area and total above-ground biomass). In comparison to unlogged forests, selectively logged forest plots in Kibale National Park on average had higher light levels, different structure characterized by lower stem density, lower total basal area and lower above-ground biomass, and a distinct taxonomic composition driven primarily by changes in the relative abundance of species. Conversely, selectively logged forest plots were like unlogged plots in functional composition, having similar community-weighted mean values for wood density, maximum height and maximum diameter. This similarity in functional composition irrespective of logging history may be due to functional recovery of logged forest or background changes in functional attributes of unlogged forest. Despite the passage of 45 years, the legacy of selective logging on the tree community in Kibale National Park is still evident, as indicated by distinct taxonomic and structural composition and reduced carbon storage in logged forest compared with unlogged forest. The effects of selective logging are exerted via influences on tree demography rather than functional trait composition. PMID:27293697

  7. Effects of selective logging on bat communities in the southeastern Amazon.

    PubMed

    Peters, Sandra L; Malcolm, Jay R; Zimmerman, Barbara L

    2006-10-01

    Although extensive areas of tropical forest are selectively logged each year, the responses of bat communities to this form of disturbance have rarely been examined. Our objectives were to (1) compare bat abundance, species composition, and feeding guild structure between unlogged and low-intensity selectively logged (1-4 logged stems/ha) sampling grids in the southeastern Amazon and (2) examine correlations between logging-induced changes in bat communities and forest structure. We captured bats in understory and canopy mist nets set in five 1-ha study grids in both logged and unlogged forest. We captured 996 individuals, representing 5 families, 32 genera, and 49 species. Abundances of nectarivorous and frugivorous taxa (Glossophaginae, Lonchophyllinae, Stenodermatinae, and Carolliinae) were higher at logged sites, where canopy openness and understory foliage density were greatest. In contrast, insectivorous and omnivorous species (Emballonuridae, Mormoopidae, Phyllostominae, and Vespertilionidae) were more abundant in unlogged sites, where canopy foliage density and variability in the understory stratum were greatest. Multivariate analyses indicated that understory bat species composition differed strongly between logged and unlogged sites but provided little evidence of logging effects for the canopy fauna. Different responses among feeding guilds and taxonomic groups appeared to be related to foraging and echolocation strategies and to changes in canopy cover and understory foliage densities. Our results suggest that even low-intensity logging modifies habitat structure, leading to changes in bat species composition.

  8. Logging-related increases in stream density in a northern California watershed

    Treesearch

    Matthew S. Buffleben

    2012-01-01

    Although many sediment budgets estimate the effects of logging, few have considered the potential impact of timber harvesting on stream density. Failure to consider changes in stream density could lead to large errors in the sediment budget, particularly between the allocation of natural and anthropogenic sources of sediment.This study...

  9. On the constancy of the lunar cratering flux over the past 3.3 billion yr

    NASA Technical Reports Server (NTRS)

    Guinness, E. A.; Arvidson, R. E.

    1977-01-01

    Utilizing a method that minimizes random fluctuations in sampling crater populations, it can be shown that the ejecta deposit of Tycho, the floor of Copernicus, and the region surrounding the Apollo 12 landing site have incremental crater size-frequency distributions that can be expressed as log-log linear functions over the diameter range from 0.1 to 1 km. Slopes are indistinguishable for the three populations, probably indicating that the surfaces are dominated by primary craters. Treating the crater populations of Tycho, the floor of Copernicus, and Apollo 12 as primary crater populations contaminated, but not overwhelmed, with secondaries, allows an attempt at calibration of the post-heavy bombardment cratering flux. Using the age of Tycho as 109 m.y., Copernicus as 800 m.y., and Apollo 12 as 3.26 billion yr, there is no basis for assuming that the flux has changed over the past 3.3 billion yr. This result can be used for dating intermediate aged surfaces by crater density.

  10. Effects of management practices on yield and quality of milk from smallholder dairy units in urban and peri-urban Morogoro, Tanzania.

    PubMed

    Gillah, Kejeri A; Kifaro, George C; Madsen, Jorgen

    2014-10-01

    A longitudinal study design was used to assess the management, chemical composition of cows' milk and quantify the microbial load of raw milk produced at farm level. Data were collected between December 2010 and September 2011 in Morogoro municipality. Milk samples were collected once every month and analysed for butter fat (BF), crude protein (CP), total solids (TS) and solids non-fat (SNF). Total bacterial count (TBC) and coliform counts (CC) were normalized by log transformation. The average milk yield was 7.0 l/day and was not influenced by feeding systems and breeds. Dairy cows owned by people who had no regular income produced more milk than government employees and retired officers. Means of BF, TS, SNF and CP were similar in different feeding systems. Wet season had significantly higher TBC (5.9 log10 cfu/ml) and CC (2.4 log10 cfu/ml) but feeding systems had no effect. Stocking density influenced TBC but not CC. It can be concluded that dairy cows produced low milk yield and its quality was poor.

  11. The probability distribution model of air pollution index and its dominants in Kuala Lumpur

    NASA Astrophysics Data System (ADS)

    AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah

    2016-11-01

    This paper focuses on statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely the log-normal, exponential, Gamma and Weibull, in the search for the best-fit distribution to the Malaysian air pollutant data. In order to determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This helps minimize the uncertainty in pollution resource estimates and improve the assessment phase of planning. The conflict in criterion results for selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
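    The fitting-and-ranking step can be sketched as follows, using scipy to fit the four candidate distributions to placeholder pollutant data and ranking them by a Kolmogorov-Smirnov statistic and AIC. The data and the particular goodness-of-fit scores are illustrative stand-ins for the five criteria used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder pollutant series (e.g. a daily PM10 sub-index); replace with data.
x = rng.lognormal(mean=3.5, sigma=0.45, size=2000)

candidates = {
    "log-normal":  stats.lognorm,
    "exponential": stats.expon,
    "gamma":       stats.gamma,
    "weibull":     stats.weibull_min,
}

# Rank the candidates by Kolmogorov-Smirnov statistic and AIC.
for name, dist in candidates.items():
    params = dist.fit(x)
    ks = stats.kstest(x, dist.cdf, args=params).statistic
    aic = 2 * len(params) - 2 * np.sum(dist.logpdf(x, *params))
    print(f"{name:>11}: KS = {ks:.3f}  AIC = {aic:.1f}")
```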

  12. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolation of the probability of rockfall propagation either from partial data or purely theoretically. The propagation can be assumed to be frictional, which permits the average propagation to be described by an energy line corresponding to the loss of energy along the path. But the loss of energy can also be treated as a multiplicative process or a purely random process. The distributions of rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are based either on theoretical considerations or on fitting to data. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.

  13. The Spontaneous Ray Log: A New Aid for Constructing Pseudo-Synthetic Seismograms

    NASA Astrophysics Data System (ADS)

    Quadir, Adnan; Lewis, Charles; Rau, Ruey-Juin

    2018-02-01

    Conventional synthetic seismograms for hydrocarbon exploration combine the sonic and density logs, whereas pseudo-synthetic seismograms are constructed with a density log plus a resistivity, neutron, gamma ray, or rarely a spontaneous potential log. Herein, we introduce a new technique for constructing a pseudo-synthetic seismogram by combining the gamma ray (GR) and self-potential (SP) logs to produce the spontaneous ray (SR) log. Three wells, each of which consisted of more than 1000 m of carbonates, sandstones, and shales, were investigated; each well was divided into 12 Groups based on formation tops, and the Pearson product-moment correlation coefficient (PCC) was calculated for each "Group" from each of the GR, SP, and SR logs. The highest PCC-valued log curves for each Group were then combined to produce a single log whose values were cross-plotted against the reference well's sonic ITT values to determine a linear transform for producing a pseudo-sonic (PS) log and, ultimately, a pseudo-synthetic seismogram. The Nash-Sutcliffe efficiency (NSE) values for the pseudo-sonic logs of the three wells were in the acceptable range of 78-83%. This technique was tested on three wells, one of which was used as a blind test well, with satisfactory results. The PCC value between the composite PS (SR) log with low-density correction and the conventional sonic (CS) log was 86%. Because of the common occurrence of spontaneous potential and gamma ray logs in many of the hydrocarbon basins of the world, this inexpensive and straightforward technique could hold significant promise in areas that are in need of alternate ways to create pseudo-synthetic seismograms for seismic reflection interpretation.
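    The correlation-and-transform step can be sketched in a few lines. The synthetic log curves and the particular GR/SP combination below are assumptions for illustration and do not reproduce the paper's SR construction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical depth-sampled curves for one "Group"; replace with real LAS data.
gr = rng.normal(60.0, 20.0, size=500)                     # gamma ray (API units)
sp = -0.5 * gr + rng.normal(0.0, 8.0, size=500)           # spontaneous potential (mV)
sonic = 55.0 + 0.4 * gr + rng.normal(0.0, 4.0, size=500)  # reference sonic ITT (us/ft)

# One possible "SR" combination of the standardised GR and SP curves; the
# paper's actual combination rule is not reproduced here.
sr = (gr - gr.mean()) / gr.std() - (sp - sp.mean()) / sp.std()

# Pearson correlation against the reference sonic, then a linear transform
# turning the selected curve into a pseudo-sonic log.
pcc = np.corrcoef(sr, sonic)[0, 1]
slope, intercept = np.polyfit(sr, sonic, deg=1)
pseudo_sonic = slope * sr + intercept
print(f"PCC(SR, sonic) = {pcc:.2f}; pseudo-sonic = {slope:.2f} * SR + {intercept:.1f}")
```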

  14. TRPM7 Is Required for Normal Synapse Density, Learning, and Memory at Different Developmental Stages.

    PubMed

    Liu, Yuqiang; Chen, Cui; Liu, Yunlong; Li, Wei; Wang, Zhihong; Sun, Qifeng; Zhou, Hang; Chen, Xiangjun; Yu, Yongchun; Wang, Yun; Abumaria, Nashat

    2018-06-19

    The TRPM7 chanzyme contributes to several biological and pathological processes in different tissues. However, its role in the CNS under physiological conditions remains unclear. Here, we show that TRPM7 knockdown in hippocampal neurons reduces structural synapse density. The synapse density is rescued by the α-kinase domain in the C terminus but not by the ion channel region of TRPM7 or by increasing extracellular concentrations of Mg 2+ or Zn 2+ . Early postnatal conditional knockout of TRPM7 in mice impairs learning and memory and reduces synapse density and plasticity. TRPM7 knockdown in the hippocampus of adult rats also impairs learning and memory and reduces synapse density and synaptic plasticity. In knockout mice, restoring expression of the α-kinase domain in the brain rescues synapse density/plasticity and memory, probably by interacting with and phosphorylating cofilin. These results suggest that brain TRPM7 is important for having normal synaptic and cognitive functions under physiological, non-pathological conditions. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  15. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  16. Behavioral Analysis of Visitors to a Medical Institution's Website Using Markov Chain Monte Carlo Methods.

    PubMed

    Suzuki, Teppei; Tani, Yuji; Ogasawara, Katsuhiko

    2016-07-25

    Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. We used the website data access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 31, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explaining variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformation prior distribution for this model and determined the visit probability classified by keyword for each category. In the case of the keyword "clinic name," the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the keyword "clinic name and regional name," the probability for a repeated visit to the website and the mammography screening page was negative. In the case of the keyword "clinic name + medical examination," the visit probability to the website was positive, and the visit probability to the information page was negative. When visitors referred to the keywords "mammography screening," the visit probability to the mammography screening page was positive (95% highest posterior density interval = 3.38-26.66). Further analysis for not only the clinic website but also various other medical institution websites is necessary to build a general inspection model for medical institution websites; we want to consider this in future research. Additionally, we hope to use the results obtained in this study as a prior distribution for future work to conduct higher-precision analysis.

  17. Behavioral Analysis of Visitors to a Medical Institution’s Website Using Markov Chain Monte Carlo Methods

    PubMed Central

    Tani, Yuji

    2016-01-01

    Background Consistent with the “attention, interest, desire, memory, action” (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. Objective We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. Methods We used the website data access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 31, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explaining variable, we built a binomial probit model that allows inspection of the contents of each purpose variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformation prior distribution for this model and determined the visit probability classified by keyword for each category. Results In the case of the keyword “clinic name,” the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the keyword “clinic name and regional name,” the probability for a repeated visit to the website and the mammography screening page was negative. In the case of the keyword “clinic name + medical examination,” the visit probability to the website was positive, and the visit probability to the information page was negative. When visitors referred to the keywords “mammography screening,” the visit probability to the mammography screening page was positive (95% highest posterior density interval = 3.38-26.66). Conclusions Further analysis for not only the clinic website but also various other medical institution websites is necessary to build a general inspection model for medical institution websites; we want to consider this in future research. Additionally, we hope to use the results obtained in this study as a prior distribution for future work to conduct higher-precision analysis. PMID:27457537

  18. Empirical relationships between tree fall and landscape-level amounts of logging and fire

    PubMed Central

    Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C.

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape. PMID:29474487

  19. Empirical relationships between tree fall and landscape-level amounts of logging and fire.

    PubMed

    Lindenmayer, David B; Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape.

  20. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson – log normal (P – LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P – LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
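
    The following is a minimal, hypothetical sketch (not the authors' code) of the effect described above: per-cell activity is drawn from an assumed log-normal distribution and the observed track counts are Poisson samples around it, so at low labelling the counting statistics distort the apparent log-normality, while at high labelling they do not. All parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cells = 2000
sigma = 0.8          # arbitrary log-normal shape for per-cell activity

for median_tracks in (2.0, 50.0):     # low vs high labelling (arbitrary levels)
    activity = rng.lognormal(mean=np.log(median_tracks), sigma=sigma, size=n_cells)
    tracks = rng.poisson(activity)                 # autoradiographic track counts
    nonzero = tracks[tracks > 0]
    # Shapiro-Wilk on log(counts): near-normal means the counts look log-normal.
    W, p = stats.shapiro(np.log(nonzero.astype(float)))
    print(f"median ~{median_tracks:>4.0f} tracks/cell: Shapiro W on log counts = {W:.3f} (p = {p:.2g})")
```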

  1. Statistical Analysis of an Infrared Thermography Inspection of Reinforced Carbon-Carbon

    NASA Technical Reports Server (NTRS)

    Comeaux, Kayla

    2011-01-01

    Each piece of flight hardware being used on the shuttle must be analyzed and pass NASA requirements before the shuttle is ready for launch. One tool used to detect cracks that lie within flight hardware is Infrared Flash Thermography. This is a non-destructive testing technique which uses an intense flash of light to heat up the surface of a material after which an Infrared camera is used to record the cooling of the material. Since cracks within the material obstruct the natural heat flow through the material, they are visible when viewing the data from the Infrared camera. We used Ecotherm, a software program, to collect data pertaining to the delaminations and analyzed the data using Ecotherm and University of Dayton Log Logistic Probability of Detection (POD) Software. The goal was to reproduce the statistical analysis produced by the University of Dayton software, by using scatter plots, log transforms, and residuals to test the assumption of normality for the residuals.

  2. Statistical distributions of ultra-low dose CT sinograms and their fundamental limits

    NASA Astrophysics Data System (ADS)

    Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Low dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistics-based iterative reconstruction methods rest on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulations to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) Diagnostic CT, where post-log data are well modeled by a normal distribution. (2) Low-dose CT, where a normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage. (3) A ULD regime that is photon-starved and where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value where a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.

  3. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
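
    A rough sketch of the pipeline described above, on synthetic data only: kernel density estimates stand in for the differential dose-volume histograms, ordinary PCA on a common dose grid plays the role of functional PCA, and a logistic regression links the leading component scores to the binary complication. The variable names, dose grid, and generating model are invented for illustration and are not the study's data or code.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for 141 patients: each patient has a set of dose samples
# (a surrogate for a differential dose-volume histogram) and a binary outcome.
n_patients, grid = 141, np.linspace(0, 80, 200)            # dose grid in Gy
densities, outcome = [], []
for _ in range(n_patients):
    high_dose_share = rng.uniform(0.1, 0.6)                 # hypothetical risk driver
    doses = np.concatenate([
        rng.normal(35, 8, size=int(400 * (1 - high_dose_share))),
        rng.normal(65, 5, size=int(400 * high_dose_share)),
    ])
    densities.append(gaussian_kde(doses)(grid))              # estimated density on the grid
    outcome.append(rng.random() < 0.05 + 0.3 * high_dose_share)

X = np.array(densities)
y = np.array(outcome, dtype=int)

# "Functional" PCA here is ordinary PCA on the densities sampled on a common grid.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                                       # leading two component scores

clf = LogisticRegression().fit(scores, y)
print("coefficients on the first two functional components:", clf.coef_.ravel())
```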

  4. Diffuse reflection from a stochastically bounded, semi-infinite medium

    NASA Technical Reports Server (NTRS)

    Lumme, K.; Peltoniemi, J. I.; Irvine, W. M.

    1990-01-01

    In order to determine the diffuse reflection from a medium bounded by a rough surface, the problem of radiative transfer in a boundary layer characterized by a statistical distribution of heights is considered. For the case that the surface is defined by a multivariate normal probability density, the propagation probability for rays traversing the boundary layer is derived and, from that probability, a corresponding radiative transfer equation. A solution of the Eddington (two stream) type is found explicitly, and examples are given. The results should be applicable to reflection from the regoliths of solar system bodies, as well as from a rough ocean surface.

  5. Quantifying nonstationary radioactivity concentration fluctuations near Chernobyl: A complete statistical description

    NASA Astrophysics Data System (ADS)

    Viswanathan, G. M.; Buldyrev, S. V.; Garger, E. K.; Kashpur, V. A.; Lucena, L. S.; Shlyakhter, A.; Stanley, H. E.; Tschiersch, J.

    2000-09-01

    We analyze nonstationary 137Cs atmospheric activity concentration fluctuations measured near Chernobyl after the 1986 disaster and find three new results: (i) the histogram of fluctuations is well described by a log-normal distribution; (ii) there is a pronounced spectral component with period T=1yr, and (iii) the fluctuations are long-range correlated. These findings allow us to quantify two fundamental statistical properties of the data: the probability distribution and the correlation properties of the time series. We interpret our findings as evidence that the atmospheric radionuclide resuspension processes are tightly coupled to the surrounding ecosystems and to large time scale weather patterns.
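
    A small illustration, on synthetic data, of two of the diagnostics mentioned above: fitting a log-normal to the concentration histogram and looking for an annual spectral component with a periodogram. The series below is generated with an assumed annual modulation and log-normal noise; it is not the Chernobyl data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic daily concentration series: log-normal fluctuations modulated by an
# annual cycle (a stand-in for the measured activity concentrations).
days = np.arange(10 * 365)
annual = 1.0 + 0.5 * np.sin(2 * np.pi * days / 365.0)
conc = annual * rng.lognormal(mean=0.0, sigma=0.7, size=days.size)

# (i) Log-normal description of the histogram of fluctuations.
shape, loc, scale = stats.lognorm.fit(conc, floc=0)
print(f"fitted log-normal: sigma = {shape:.2f}, median = {scale:.2f}")

# (ii) Periodogram of the log-concentration: look for a peak near 1/365 per day.
x = np.log(conc) - np.log(conc).mean()
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, d=1.0)          # cycles per day
peak = freqs[1:][np.argmax(power[1:])]          # skip the zero-frequency bin
print(f"dominant period ~ {1.0 / peak:.0f} days")
```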

  6. Entanglement transitions induced by large deviations

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to the Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.

  7. Entanglement transitions induced by large deviations.

    PubMed

    Bhosale, Udaysinh T

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to the Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.
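
    The random-matrix setting above can be explored numerically with a short simulation: random bipartite pure states are drawn from complex Ginibre matrices, and the smallest Schmidt eigenvalue and von Neumann entropy are accumulated. The dimension and sample count below are arbitrary, and the quoted ~1/N^3 scale for the smallest eigenvalue is only an order-of-magnitude guide.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random pure states of an N x N bipartite system: the Schmidt eigenvalues are
# the eigenvalues of the normalized reduced density matrix. We look at the
# empirical distribution of the smallest one and at the von Neumann entropy.
N, n_samples = 8, 20000
smallest, entropy = np.empty(n_samples), np.empty(n_samples)
for i in range(n_samples):
    G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))   # Ginibre matrix
    w = np.linalg.svd(G, compute_uv=False) ** 2
    w /= w.sum()                                                  # Schmidt eigenvalues
    smallest[i] = w.min()
    entropy[i] = -np.sum(w * np.log(w))

print(f"mean smallest Schmidt eigenvalue: {smallest.mean():.4f} (typical scale roughly 1/N^3)")
print(f"mean von Neumann entropy:         {entropy.mean():.3f} (maximum ln N = {np.log(N):.3f})")
```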

  8. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
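
    For a single log-normal chance constraint, the deterministic equivalent is just a quantile substitution, which is the core of the model class described above. The sketch below uses an invented runoff-factor constraint with arbitrary parameters; it is not the Erhai Lake model. The same substitution extends to interval-valued parameters by evaluating the quantile at the interval bounds.

```python
import numpy as np
from scipy.stats import lognorm, norm

# Hypothetical single constraint: pollutant load r * x must stay below a limit L
# with probability alpha, where the runoff factor r is log-normal(mu, sigma).
# The chance constraint P(r * x <= L) >= alpha has the deterministic equivalent
#     x <= L / q_alpha,  with  q_alpha = exp(mu + sigma * Phi^-1(alpha)),
# i.e. the alpha-quantile of r. All numbers below are illustrative only.
mu, sigma, L = np.log(2.0), 0.5, 100.0
for alpha in (0.80, 0.90, 0.95, 0.99):
    q = lognorm.ppf(alpha, s=sigma, scale=np.exp(mu))
    print(f"alpha = {alpha:.2f}: runoff quantile = {q:5.2f}, max allowable activity x = {L / q:6.2f}")

# The same quantile written explicitly with the normal inverse CDF:
assert np.allclose(lognorm.ppf(0.95, s=sigma, scale=np.exp(mu)),
                   np.exp(mu + sigma * norm.ppf(0.95)))
```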

  9. Analysis of Electronic Densities and Integrated Doses in Multiform Glioblastomas Stereotactic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baron-Aznar, C.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Integrated dose is the total energy delivered in a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan© software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.

  10. Petrophysical analysis of geophysical logs of the National Drilling Company-U.S. Geological Survey ground-water research project for Abu Dhabi Emirate, United Arab Emirates

    USGS Publications Warehouse

    Jorgensen, Donald G.; Petricola, Mario

    1994-01-01

    A program of borehole-geophysical logging was implemented to supply geologic and geohydrologic information for a regional ground-water investigation of Abu Dhabi Emirate. Analysis of geophysical logs was essential to provide information on geohydrologic properties because drill cuttings were not always adequate to define lithologic boundaries. The standard suite of logs obtained at most project test holes consisted of caliper, spontaneous potential, gamma ray, dual induction, microresistivity, compensated neutron, compensated density, and compensated sonic. Ophiolitic detritus from the nearby Oman Mountains has unusual petrophysical properties that complicated the interpretation of geophysical logs. The density of coarse ophiolitic detritus is typically greater than 3.0 grams per cubic centimeter, porosity values are large, often exceeding 45 percent, and the clay fraction included unusual clays, such as lizardite. Neither the spontaneous-potential log nor the natural gamma-ray log were useable clay indicators. Because intrinsic permeability is a function of clay content, additional research in determining clay content was critical. A research program of geophysical logging was conducted to determine the petrophysical properties of the shallow subsurface formations. The logging included spectral-gamma and thermal-decay-time logs. These logs, along with the standard geophysical logs, were correlated to mineralogy and whole-rock chemistry as determined from sidewall cores. Thus, interpretation of lithology and fluids was accomplished. Permeability and specific yield were calculated from geophysical-log data and correlated to results from an aquifer test. On the basis of results from the research logging, a method of lithologic and water-resistivity interpretation was developed for the test holes at which the standard suite of logs were obtained. In addition, a computer program was developed to assist in the analysis of log data. Geohydrologic properties were estimated, including volume of clay matrix, volume of matrix other than clay, density of matrix other than clay, density of matrix, intrinsic permeability, specific yield, and specific storage. Geophysical logs were used to (1) determine lithology, (2) correlate lithologic and permeable zones, (3) calibrate seismic reprocessing, (4) calibrate transient-electromagnetic surveys, and (5) calibrate uphole-survey interpretations. Logs were used at the drill site to (1) determine permeability zones, (2) determine dissolved-solids content, which is a function of water resistivity, and (3) design wells accordingly. Data and properties derived from logs were used to determine transmissivity and specific yield of aquifer materials.

  11. Globular cluster seeding by primordial black hole population

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgov, A.; Postnov, K., E-mail: dolgov@fe.infn.it, E-mail: kpostnov@gmail.com

    Primordial black holes (PBHs) that form in the early Universe in the modified Affleck-Dine (AD) mechanism of baryogenesis should have an intrinsic log-normal mass distribution. We show that the parameters of this distribution, adjusted to provide the required spatial density of massive seeds (≥ 10⁴ M⊙) for early galaxy formation without violating the dark matter density constraints, predict the existence of a population of intermediate-mass PBHs with a number density of ∼100 Mpc⁻³. We argue that the population of intermediate-mass AD PBHs can also seed the formation of globular clusters in galaxies. In this scenario, each globular cluster should host an intermediate-mass black hole with a mass of a few thousand solar masses, and need not be immersed in a massive dark matter halo.

  12. Statistical process control for residential treated wood

    Treesearch

    Patricia K. Lebow; Timothy M. Young; Stan Lebow

    2017-01-01

    This paper is the first stage of a study that attempts to improve the process of manufacturing treated lumber through the use of statistical process control (SPC). Analysis of industrial and auditing agency data sets revealed there are differences between the industry and agency probability density functions (pdf) for normalized retention data. Resampling of batches of...

  13. Eliminating the rugosity effect from compensated density logs by geometrical response matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flaum, C.; Holenka, J.M.; Case, C.R.

    1991-06-01

    A theoretical and experimental effort to understand the effects of borehole rugosity on individual detector responses yielded an improved method of processing compensated density logs. Historically, the spine/ribs technique for obtaining borehole and mudcake compensation of dual-detector, gamma-gamma density logs has been very successful as long as the borehole and other environmental effects vary slowly with depth and the interest is limited to vertical features broader than several feet. With the increased interest in higher vertical resolution, a more detailed analysis of the effect of such quickly varying environmental effects as rugosity was required. A laboratory setup simulating the effect of rugosity on Schlumberger Litho-Density℠ tools (LDT) was used to study vertical response in the presence of rugosity. The data served as the benchmark for the Monte Carlo models used to generate synthetic density logs in the presence of more complex rugosity patterns. The results provided in this paper show that proper matching of the two detector responses before application of conventional compensation methods can eliminate rugosity effects without degrading the measurement's vertical resolution. The accuracy of the results is as good as that obtained for a parallel mudcake or standoff with the conventional method. Application to both field and synthetic logs confirmed the validity of these results.

  14. Wavelength-normalized spectroscopic analysis of Staphylococcus aureus and Pseudomonas aeruginosa growth rates.

    PubMed

    McBirney, Samantha E; Trinh, Kristy; Wong-Beringer, Annie; Armani, Andrea M

    2016-10-01

    Optical density (OD) measurements are the standard approach used in microbiology for characterizing bacteria concentrations in culture media. OD is based on measuring the optical absorbance of a sample at a single wavelength, and any error will propagate through all calculations, leading to reproducibility issues. Here, we use the conventional OD technique to measure the growth rates of two different species of bacteria, Pseudomonas aeruginosa and Staphylococcus aureus. The same samples are also analyzed over the entire UV-Vis wavelength spectrum, allowing a distinctly different strategy for data analysis to be performed. Specifically, instead of only analyzing a single wavelength, a multi-wavelength normalization process is implemented. When the OD method is used, the detected signal does not follow the log growth curve. In contrast, the multi-wavelength normalization process minimizes the impact of bacteria byproducts and environmental noise on the signal, thereby accurately quantifying growth rates with high fidelity at low concentrations.

  15. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  16. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background: Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods: Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results: The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion: The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346
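
    A compact illustration of the curve-fitting comparison reported above, on synthetic boundary-curve values generated with an exponential decay: linear and quadratic fits in the original scale are compared with an exponential fit (a straight line in log space) via the coefficient of determination. All numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic boundary-curve values that decay roughly exponentially with the
# total score (a stand-in for frequencies read off a log-normal-scale plot).
total_score = np.arange(1, 41, dtype=float)
y = 5000.0 * np.exp(-0.18 * total_score) * rng.lognormal(0.0, 0.1, total_score.size)

def r_squared(y_obs, y_fit):
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear and quadratic fits in the original scale; exponential fit via a
# straight line in log(y), which is what a linear pattern on a log scale means.
lin = np.polyval(np.polyfit(total_score, y, 1), total_score)
quad = np.polyval(np.polyfit(total_score, y, 2), total_score)
expo = np.exp(np.polyval(np.polyfit(total_score, np.log(y), 1), total_score))

for name, fit in (("linear", lin), ("quadratic", quad), ("exponential", expo)):
    print(f"{name:>12}: R^2 = {r_squared(y, fit):.3f}")
```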

  17. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.

  18. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663
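
    A toy version of the three tests listed above, assuming Poisson counts per forecast bin: a count test on the total number of events, a likelihood score under the forecast, and a likelihood ratio against a uniform-rate null. The forecast values are invented for illustration and do not correspond to any real forecast.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)

# A forecast specifies an expected earthquake count per spatial bin; the null
# hypothesis here is a uniform rate over the same bins. Illustrative numbers.
forecast = np.array([0.2, 0.5, 1.5, 3.0, 0.8])       # expected counts per bin
null = np.full_like(forecast, forecast.sum() / forecast.size)
observed = rng.poisson(forecast)                      # pretend these were the actual events

# (i) Count test: is the total observed count consistent with the forecast total?
n_obs, n_exp = observed.sum(), forecast.sum()
p_n = 2 * min(poisson.cdf(n_obs, n_exp), poisson.sf(n_obs - 1, n_exp))
print(f"count test: observed {n_obs}, expected {n_exp:.1f}, two-sided p ~ {min(p_n, 1.0):.2f}")

# (ii)-(iii) Likelihood score under the forecast and likelihood ratio vs the null.
ll_forecast = poisson.logpmf(observed, forecast).sum()
ll_null = poisson.logpmf(observed, null).sum()
print(f"log-likelihood ratio (forecast vs null): {ll_forecast - ll_null:.2f}")
```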

  19. Escherichia coli bacteria density in relation to turbidity, streamflow characteristics, and season in the Chattahoochee River near Atlanta, Georgia, October 2000 through September 2008—Description, statistical analysis, and predictive modeling

    USGS Publications Warehouse

    Lawrence, Stephen J.

    2012-01-01

    Regression analyses show that E. coli density in samples was strongly related to turbidity, streamflow characteristics, and season at both sites. The regression equation chosen for the Norcross data showed that 78 percent of the variability in E. coli density (in log base 10 units) was explained by the variability in turbidity values (in log base 10 units), streamflow event (dry-weather flow or stormflow), season (cool or warm), and an interaction term that is the cross product of streamflow event and turbidity. The regression equation chosen for the Atlanta data showed that 76 percent of the variability in E. coli density (in log base 10 units) was explained by the variability in turbidity values (in log base 10 units), water temperature, streamflow event, and an interaction term that is the cross product of streamflow event and turbidity. Residual analysis and model confirmation using new data indicated the regression equations selected at both sites predicted E. coli density within the 90 percent prediction intervals of the equations and could be used to predict E. coli density in real time at both sites.
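
    A schematic of the kind of regression equation described above, fitted by ordinary least squares on synthetic data: log10 E. coli density on log10 turbidity, a streamflow-event indicator, a season indicator, and a streamflow x turbidity interaction. The generating coefficients are arbitrary and unrelated to the Chattahoochee models.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for a Norcross-type model (arbitrary coefficients).
n = 300
log_turb = rng.normal(1.0, 0.5, n)                 # log10 turbidity
storm = rng.integers(0, 2, n)                      # 0 = dry weather, 1 = stormflow
warm = rng.integers(0, 2, n)                       # 0 = cool season, 1 = warm season
log_ecoli = (1.2 + 0.8 * log_turb + 0.5 * storm + 0.3 * warm
             + 0.4 * storm * log_turb + rng.normal(0, 0.25, n))

# Ordinary least squares with an explicit interaction column.
X = np.column_stack([np.ones(n), log_turb, storm, warm, storm * log_turb])
beta, *_ = np.linalg.lstsq(X, log_ecoli, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((log_ecoli - fitted) ** 2) / np.sum((log_ecoli - log_ecoli.mean()) ** 2)
names = ["intercept", "log10(turbidity)", "storm", "warm season", "storm x turbidity"]
for name, b in zip(names, beta):
    print(f"{name:>18}: {b:+.2f}")
print(f"R^2 = {r2:.2f}")
```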

  20. Morphometric analyses of hominoid crania, probabilities of conspecificity and an approximation of a biological species constant.

    PubMed

    Thackeray, J F; Dykes, S

    2016-02-01

    Thackeray has previously explored the possibility of using a morphometric approach to quantify the "amount" of variation within species and to assess probabilities of conspecificity when two fossil specimens are compared, instead of "pigeon-holing" them into discrete species. In an attempt to obtain a statistical (probabilistic) definition of a species, Thackeray has recognized an approximation of a biological species constant (T=-1.61) based on the log-transformed standard error of the coefficient m (log sem) in regression analysis of cranial and other data from pairs of specimens of conspecific extant species, associated with regression equations of the form y=mx+c where m is the slope and c is the intercept, using measurements of any specimen A (x axis), and any specimen B of the same species (y axis). The log-transformed standard error of the co-efficient m (log sem) is a measure of the degree of similarity between pairs of specimens, and in this study shows central tendency around a mean value of -1.61 and standard deviation 0.10 for modern conspecific specimens. In this paper we focus attention on the need to take into account the range of difference in log sem values (Δlog sem or "delta log sem") obtained from comparisons when specimen A (x axis) is compared to B (y axis), and secondly when specimen A (y axis) is compared to B (x axis). Thackeray's approach can be refined to focus on high probabilities of conspecificity for pairs of specimens for which log sem is less than -1.61 and for which Δlog sem is less than 0.03. We appeal for the adoption of a concept here called "sigma taxonomy" (as opposed to "alpha taxonomy"), recognizing that boundaries between species are not always well defined. Copyright © 2015 Elsevier GmbH. All rights reserved.
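
    The log sem statistic itself is straightforward to compute; the sketch below, using invented measurements for two hypothetical specimens, fits y = mx + c both ways around, takes log10 of the standard error of the slope, and reports the Δlog sem between the two orderings alongside the thresholds quoted above.

```python
import numpy as np

def log_sem(x, y):
    """log10 of the standard error of the slope m in the least-squares fit y = m*x + c."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    m, c = np.polyfit(x, y, 1)
    resid = y - (m * x + c)
    se_m = np.sqrt((resid @ resid) / (n - 2) / np.sum((x - x.mean()) ** 2))
    return np.log10(se_m)

# Hypothetical cranial measurements (same variables, in mm) for two specimens.
spec_a = np.array([112.0, 98.0, 134.0, 65.0, 51.0, 88.0, 120.0, 73.0])
spec_b = np.array([113.0, 97.2, 133.1, 66.0, 50.2, 89.1, 119.0, 74.1])

ls_ab = log_sem(spec_a, spec_b)     # A on the x axis, B on the y axis
ls_ba = log_sem(spec_b, spec_a)     # axes swapped
delta = abs(ls_ab - ls_ba)
print(f"log sem (A,B) = {ls_ab:.3f}, log sem (B,A) = {ls_ba:.3f}, delta = {delta:.3f}")
print("within the thresholds quoted above?", ls_ab < -1.61 and ls_ba < -1.61 and delta < 0.03)
```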

  1. Characterization of the spatial variability of channel morphology

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2002-01-01

    The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.

  2. In vivo NMR imaging of sodium-23 in the human head.

    PubMed

    Hilal, S K; Maudsley, A A; Ra, J B; Simon, H E; Roschmann, P; Wittekoek, S; Cho, Z H; Mun, S K

    1985-01-01

    We report the first clinical nuclear magnetic resonance (NMR) images of cerebral sodium distribution in normal volunteers and in patients with a variety of pathological lesions. We have used a 1.5 T NMR magnet system. When compared with proton distribution, sodium shows a greater variation in its concentration from tissue to tissue and from normal to pathological conditions. Image contrast calculated on the basis of sodium concentration is 7 to 18 times greater than that of proton spin density. Normal images emphasize the extracellular compartments. In the clinical studies, areas of recent or old cerebral infarction and tumors show a pronounced increase of sodium content (300-400%). Actual measurements of image density values indicate that there is probably a further accentuation of the contrast by the increased "NMR visibility" of sodium in infarcted tissue. Sodium imaging may prove to be a more sensitive means for early detection of some brain disorders than other imaging methods.

  3. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarpelli, M; Eickhoff, J; Perlman, S

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
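
    A minimal sketch of the transformation search described above, on synthetic SUV-like values: one-parameter Box-Cox transforms are scanned and the λ maximizing the Shapiro-Wilk W statistic is kept, with λ = 0 corresponding to the pure log transform. Sample size and distribution parameters are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic SUVmean values (log-normal by construction, one imaging timepoint).
suv = rng.lognormal(mean=0.7, sigma=0.4, size=40)

lambdas = np.linspace(-2.0, 2.0, 81)
W = []
for lam in lambdas:
    transformed = stats.boxcox(suv, lmbda=lam)     # lam = 0 corresponds to log(x)
    W_stat, _ = stats.shapiro(transformed)
    W.append(W_stat)

best = lambdas[int(np.argmax(W))]
W_log, _ = stats.shapiro(np.log(suv))
print(f"lambda maximizing Shapiro-Wilk W: {best:+.2f} (0 would mean a pure log transform)")
print(f"W at the log transform (lambda = 0): {W_log:.3f}")
```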

  4. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
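
    The central claim above, that estimation error inflates the expected failure frequency, can be reproduced with a short Monte Carlo for the log-normal case: parameters are estimated from a finite sample, the threshold is set at the fitted quantile, and the true exceedance probability is averaged over many replications. The sample size and nominal level below are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# The decision maker wants failure probability p = 1%, estimates log-normal
# parameters from n observations, and sets the threshold at the fitted 99th
# percentile. Averaged over estimation error, the realized failure frequency
# exceeds the nominal 1%.
p_nominal, n, trials = 0.01, 30, 20000
mu_true, sigma_true = 0.0, 1.0
exceed = np.empty(trials)
for i in range(trials):
    sample = rng.normal(mu_true, sigma_true, size=n)          # log of the risk factor
    mu_hat, sig_hat = sample.mean(), sample.std(ddof=1)
    threshold = mu_hat + sig_hat * stats.norm.ppf(1 - p_nominal)
    # True probability that the (log) risk factor exceeds the chosen threshold.
    exceed[i] = stats.norm.sf(threshold, loc=mu_true, scale=sigma_true)

print(f"nominal failure probability: {p_nominal:.3f}")
print(f"expected failure frequency under parameter uncertainty: {exceed.mean():.3f}")
```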

  5. Electrofacies analysis for coal lithotype profiling based on high-resolution wireline log data

    NASA Astrophysics Data System (ADS)

    Roslin, A.; Esterle, J. S.

    2016-06-01

    The traditional approach to coal lithotype analysis is based on a visual characterisation of coal in core, mine or outcrop exposures. As not all wells are fully cored, the petroleum and coal mining industries increasingly use geophysical wireline logs for lithology interpretation. This study demonstrates a method for interpreting coal lithotypes from geophysical wireline logs, and in particular discriminating between bright or banded, and dull coal at similar densities to a decimetre level. The study explores the optimum combination of geophysical log suites for training the coal electrofacies interpretation, using the neural network concept, and then propagating the results to wells with fewer wireline data. This approach is objective and has a recordable reproducibility and rule set. In addition to conventional gamma ray and density logs, laterolog resistivity, microresistivity and PEF data were used in the study. Array resistivity data from a compact micro imager (CMI tool) were processed into a single microresistivity curve and integrated with the conventional resistivity data in the cluster analysis. Microresistivity data were included in the analysis to test the hypothesis that the improved vertical resolution of the microresistivity curve can enhance the accuracy of the clustering analysis. The addition of the PEF log allowed discrimination between low density bright to banded coal electrofacies and low density inertinite-rich dull electrofacies. The results of the clustering analysis were validated statistically, and the electrofacies results were compared to manually derived coal lithotype logs.

  6. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.

  7. The ATLASGAL survey: distribution of cold dust in the Galactic plane. Combination with Planck data

    NASA Astrophysics Data System (ADS)

    Csengeri, T.; Weiss, A.; Wyrowski, F.; Menten, K. M.; Urquhart, J. S.; Leurini, S.; Schuller, F.; Beuther, H.; Bontemps, S.; Bronfman, L.; Henning, Th.; Schneider, N.

    2016-01-01

    Context. Sensitive ground-based submillimeter surveys, such as ATLASGAL, provide a global view on the distribution of cold dense gas in the Galactic plane at up to two times better angular resolution compared to recent space-based surveys with Herschel. However, a drawback of ground-based continuum observations is that they intrinsically filter emission, at angular scales larger than a fraction of the field-of-view of the array, when subtracting the sky noise in the data processing. The lost information on the distribution of diffuse emission can be, however, recovered from space-based, all-sky surveys with Planck. Aims: Here we aim to demonstrate how this information can be used to complement ground-based bolometer data and present reprocessed maps of the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) survey. Methods: We use the maps at 353 GHz from the Planck/HFI instrument, which performed a high sensitivity all-sky survey at a frequency close to that of the APEX/LABOCA array, which is centred on 345 GHz. Complementing the ground-based observations with information on larger angular scales, the resulting maps reveal the distribution of cold dust in the inner Galaxy with a larger spatial dynamic range. We visually describe the observed features and assess the global properties of dust distribution. Results: Adding information from large angular scales helps to better identify the global properties of the cold Galactic interstellar medium. To illustrate this, we provide mass estimates from the dust towards the W43 star-forming region and estimate a column density contrast of at least a factor of five between a low intensity halo and the star-forming ridge. We also show examples of elongated structures extending over angular scales of 0.5°, which we refer to as thin giant filaments. Corresponding to > 30 pc structures in projection at a distance of 3 kpc, these dust lanes are very extended and show large aspect ratios. We assess the fraction of dense gas by determining the contribution of the APEX/LABOCA maps to the combined maps, and estimate 2-5% for the dense gas fraction (corresponding to A_V > 7 mag) on average in the Galactic plane. We also show probability distribution functions of the column density (N-PDF), which reveal the typically observed log-normal distribution for low column density and exhibit an excess at high column densities. As a reference for extragalactic studies, we show the line-of-sight integrated N-PDF of the inner Galaxy, and derive a contribution of this excess to the total column density of ~2.2%, corresponding to N_H2 = 2.92 × 10²² cm⁻². Taking the total flux density observed in the maps, we provide an independent estimate of the mass of molecular gas in the inner Galaxy of ~1 × 10⁹ M⊙, which is consistent with previous estimates using CO emission. From the mass and dense gas fraction (fDG), we estimate a Galactic SFR of Ṁ = 1.3 M⊙ yr⁻¹. Conclusions: Retrieving the extended emission helps to better identify massive giant filaments which are elongated and confined structures. We show that the log-normal distribution of low column density gas is ubiquitous in the inner Galaxy. While the distribution of diffuse gas is relatively homogenous in the inner Galaxy, the central molecular zone (CMZ) stands out with a higher dense gas fraction despite its low star formation efficiency. Altogether our findings explain well the observed low star formation efficiency of the Milky Way by the low fDG in the Galactic ISM.
In contrast, the high fDG observed towards the CMZ, despite its low star formation activity, suggests that, in that particular region of our Galaxy, high density gas is not the bottleneck for star formation.

  8. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  9. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws need to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
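
    A worked version of the binomial arithmetic behind the point estimate method, under the usual assumption of independent hits and misses: the 29-of-29 demonstration, the probability of passing it for a given true POD, and a generalization that tolerates a few misses. The 31-flaw example at the end is purely illustrative, not a standard demonstration set.

```python
from math import comb

# Point-estimate demonstration: all 29 of 29 flaws must be detected. If the true
# POD for that flaw size is p, the probability of passing the demonstration
# (PPD) is p**29; requiring at most a 5% chance of passing with an inadequate
# procedure gives the familiar 90/95 point: p = 0.05**(1/29) ~ 0.90.
n_flaws = 29
pod_9095 = 0.05 ** (1.0 / n_flaws)
print(f"POD demonstrated at 95% confidence by {n_flaws}/{n_flaws} hits: {pod_9095:.3f}")

for p_true in (0.90, 0.95, 0.98):
    print(f"true POD {p_true:.2f}: probability of passing the demonstration = {p_true ** n_flaws:.2f}")

# A generalization that tolerates m misses uses the binomial tail directly.
def ppd_with_misses(p, n, m):
    """Probability of at least n-m detections out of n independent flaws."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n - m, n + 1))

print(f"P(pass) with up to 2 misses in 31 flaws at true POD 0.90: {ppd_with_misses(0.90, 31, 2):.2f}")
```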

  10. Statistical characterization of thermal plumes in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    Zhou, Sheng-Qi; Xie, Yi-Chao; Sun, Chao; Xia, Ke-Qing

    2016-09-01

    We report an experimental study on the statistical properties of the thermal plumes in turbulent thermal convection. A method has been proposed to extract the basic characteristics of thermal plumes from temporal temperature measurements inside the convection cell. It has been found that both the plume amplitude A and the cap width w, in the time domain, approximately follow log-normal distributions. In particular, the normalized most probable front width is found to be a characteristic scale of thermal plumes, which is much larger than the thermal boundary layer thickness. Over a wide range of the Rayleigh number, the statistical characterizations of the thermal fluctuations of plumes and the turbulent background, the plume front width, and the plume spacing have been discussed and compared with theoretical predictions and morphological observations. For the most part, good agreement has been found with the direct observations.

  11. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the Posterior Distribution Function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the Posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) offers a very important MCMC approach to dealing with higher-dimensional complex problems. The HMC uses the molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability where the gradient of the log-density of the Posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of the Hybrid Monte Carlo (HMC) and is designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures: an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and are compared to the application of the HMC algorithm on the same structures.
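
    For orientation, the sketch below implements plain HMC (not the shadow variant) on a toy two-dimensional Gaussian posterior, showing the leapfrog molecular-dynamics moves guided by the gradient of the log-density and the Metropolis accept/reject step that SHMC modifies by targeting a shadow Hamiltonian. Step size, trajectory length, and the target covariance are arbitrary; a Bayesian FEM-updating application would replace log_post and grad_log_post with the posterior over the model parameters.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy 2-D Gaussian "posterior" with known covariance.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)
log_post = lambda q: -0.5 * q @ prec @ q
grad_log_post = lambda q: -prec @ q

def hmc_step(q, step=0.15, n_leapfrog=20):
    p = rng.normal(size=q.shape)                         # momentum refresh
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step * grad_log_post(q_new)           # leapfrog integration
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new += step * grad_log_post(q_new)
    q_new += step * p_new
    p_new += 0.5 * step * grad_log_post(q_new)
    # Metropolis accept/reject on the total Hamiltonian.
    h_old = -log_post(q) + 0.5 * p @ p
    h_new = -log_post(q_new) + 0.5 * p_new @ p_new
    return (q_new, True) if np.log(rng.random()) < h_old - h_new else (q, False)

q, samples, accepted = np.zeros(2), [], 0
for _ in range(5000):
    q, ok = hmc_step(q)
    samples.append(q.copy())
    accepted += ok
samples = np.array(samples)
print(f"acceptance rate: {accepted / 5000:.2f}")
print("sample covariance:\n", np.cov(samples.T).round(2))
```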

  12. Experimental and statistical study on fracture boundary of non-irradiated Zircaloy-4 cladding tube under LOCA conditions

    NASA Astrophysics Data System (ADS)

    Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki

    2018-02-01

    For estimating fracture probability of fuel cladding tube under loss-of-coolant accident conditions of light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. Then, the obtained binary data with respect to fracture or non-fracture of the cladding tube specimen were analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Then, model selection was performed in terms of physical characteristics and information criteria, a widely applicable information criterion and a widely applicable Bayesian information criterion. As a result, it was clarified that the log-probit model was the best among the three models to estimate the fracture probability, in terms of the degree of prediction accuracy both for new data to be obtained and for the true model. Using the log-probit model, it was shown that 20% ECR corresponded, with 95% confidence, to a 5% fracture probability level for the cladding tube specimens.
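
    A hedged sketch of fitting a log-probit model to binary fracture data by maximum likelihood, using synthetic outcomes rather than the test data above; the generating parameters are chosen only so that roughly 20% ECR sits near the 5% fracture-probability level, for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(10)

# Synthetic fracture/no-fracture outcomes versus ECR (%), generated from a
# log-probit model P(fracture) = Phi(a + b * ln(ECR)) and then re-fitted.
a_true, b_true = -13.6, 4.0
ecr = rng.uniform(5.0, 45.0, size=120)
p_true = norm.cdf(a_true + b_true * np.log(ecr))
fractured = (rng.random(ecr.size) < p_true).astype(float)

def neg_log_lik(theta):
    a, b = theta
    p = np.clip(norm.cdf(a + b * np.log(ecr)), 1e-12, 1 - 1e-12)
    return -np.sum(fractured * np.log(p) + (1 - fractured) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.array([-10.0, 3.0]), method="Nelder-Mead")
a_hat, b_hat = fit.x
# ECR at which the fitted fracture probability reaches 5%.
ecr_5pct = np.exp((norm.ppf(0.05) - a_hat) / b_hat)
print(f"fitted log-probit: a = {a_hat:.2f}, b = {b_hat:.2f}; 5% fracture probability at ECR ~ {ecr_5pct:.1f}%")
```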

  13. Warm absorbers in X-rays (WAX), a comprehensive high-resolution grating spectral study of a sample of Seyfert galaxies - I. A global view and frequency of occurrence of warm absorbers.

    NASA Astrophysics Data System (ADS)

    Laha, Sibasish; Guainazzi, Matteo; Dewangan, Gulab C.; Chakravorty, Susmita; Kembhavi, Ajit K.

    2014-07-01

    We present results from a homogeneous analysis of the broad-band 0.3-10 keV CCD resolution as well as of the soft X-ray high-resolution grating spectra of a hard X-ray flux-limited sample of 26 Seyfert galaxies observed with XMM-Newton. Our goal is to characterize warm absorbers (WAs) along the line of sight to the active nucleus. We significantly detect WAs in 65 per cent of the sample sources. Our results are consistent with WAs being present in at least half of the Seyfert galaxies in the nearby Universe, in agreement with previous estimates. We find a gap in the distribution of the ionization parameter in the range 0.5 < log ξ < 1.5 which we interpret as a thermally unstable region for WA clouds. This may indicate that the WA flow is probably constituted by a clumpy distribution of discrete clouds rather than a continuous medium. The distribution of the WA column densities for the sources with broad Fe Kα lines are similar to those sources which do not have broadened emission lines. Therefore, the detected broad Fe Kα emission lines are bona fide and not artefacts of ionized absorption in the soft X-rays. The WA parameters show no correlation among themselves, with the exception of the ionization parameter versus column density. The shallow slope of the log ξ versus log vout linear regression (0.12 ± 0.03) is inconsistent with the scaling laws predicted by radiation or magnetohydrodynamic-driven winds. Our results also suggest that WA and ultra fast outflows do not represent extreme manifestation of the same astrophysical system.

  14. Estimating tree bole and log weights from green densities measured with the Bergstrom Xylodensimeter.

    Treesearch

    Dale R. Waddell; Michael B. Lambert; W.Y. Pong

    1984-01-01

    The performance of the Bergstrom xylodensimeter, designed to measure the green density of wood, was investigated and compared with a technique that derived green densities from wood disk samples. In addition, log and bole weights of old-growth Douglas-fir and western hemlock were calculated by various formulas and compared with lifted weights measured with a load cell...

  15. Combined natural gamma ray spectral/litho-density measurements applied to complex lithologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quirein, J.A.; Gardner, J.S.; Watson, J.T.

    1982-09-01

    Well log data has long been used to provide lithological descriptions of complex formations. Historically, most of the approaches used have been restrictive because they assumed fixed, known, and distinct lithologies for specified zones. The approach described in this paper attempts to alleviate this restriction by estimating the "probability of a model" for the models suggested as most likely by the reservoir geology. Lithological variables are simultaneously estimated from response equations for each model and combined in accordance with the probability of each respective model. The initial application of this approach has been the estimation of calcite, quartz, and dolomite in the presence of clays, feldspars, anhydrite, or salt. Estimations were made by using natural gamma ray spectra, photoelectric effect, bulk density, and neutron porosity information. For each model, response equations and parameter selections are obtained from the thorium vs potassium crossplot and the apparent matrix density vs apparent volumetric photoelectric cross section crossplot. The thorium and potassium response equations are used to estimate the volumes of clay and feldspar. The apparent matrix density and volumetric cross section response equations can then be corrected for the presence of clay and feldspar. A test ensures that the clay correction lies within the limits for the assumed lithology model. Results are presented for varying lithologies. For one test well, 6,000 feet were processed in a single pass, without zoning and without adjusting more than one parameter pick. The program recognized sand, limestone, dolomite, clay, feldspar, anhydrite, and salt without analyst intervention.

  16. Headwater streams and forest management: does ecoregional context influence logging effects on benthic communities?

    USGS Publications Warehouse

    Medhurst, R. Bruce; Wipfli, Mark S.; Binckley, Chris; Polivka, Karl; Hessburg, Paul F.; Salter, R. Brion

    2010-01-01

    Effects of forest management on stream communities have been widely documented, but the role that climate plays in the disturbance outcomes is not understood. In order to determine whether the effect of disturbance from forest management on headwater stream communities varies by climate, we evaluated benthic macroinvertebrate communities in 24 headwater streams that differed in forest management (logged-roaded vs. unlogged-unroaded, hereafter logged and unlogged) within two ecological sub-regions (wet versus dry) within the eastern Cascade Range, Washington, USA. In both ecoregions, total macroinvertebrate density was highest at logged sites (P = 0.001) with gathering-collectors and shredders dominating. Total taxonomic richness and diversity did not differ between ecoregions or forest management types. Shredder densities were positively correlated with total deciduous and Sitka alder (Alnus sinuata) riparian cover. Further, differences in shredder density between logged and unlogged sites were greater in the wet ecoregion (logging × ecoregion interaction; P = 0.006) suggesting that differences in post-logging forest succession between ecoregions were responsible for differences in shredder abundance. Headwater stream benthic community structure was influenced by logging and regional differences in climate. Future development of ecoregional classification models at the subbasin scale, and use of functional metrics in addition to structural metrics, may allow for more accurate assessments of anthropogenic disturbances in mountainous regions where mosaics of localized differences in climate are common.

  17. Petrophysical evaluation of subterranean formations

    DOEpatents

    Klein, James D; Schoderbek, David A; Mailloux, Jason M

    2013-05-28

    Methods and systems are provided for evaluating petrophysical properties of subterranean formations and comprehensively evaluating hydrate presence through a combination of computer-implemented log modeling and analysis. Certain embodiments include the steps of running a number of logging tools in a wellbore to obtain a variety of wellbore data and logs, and evaluating and modeling the log data to ascertain various petrophysical properties. Examples of suitable logging techniques that may be used in combination with the present invention include, but are not limited to, sonic logs, electrical resistivity logs, gamma ray logs, neutron porosity logs, density logs, NMR (nuclear magnetic resonance) logs, or any combination or subset thereof.

  18. Endogenous Sex Steroid Hormones, Lipid Subfractions, and Ectopic Adiposity in Asian Indians.

    PubMed

    Kim, Catherine; Kong, Shengchun; Krauss, Ronald M; Stanczyk, Frank Z; Reddy, Srinivasa T; Needham, Belinda L; Kanaya, Alka M

    2015-12-01

    Estradiol, testosterone (T), and sex hormone binding globulin (SHBG) levels are associated with lipid subfractions in men and women. Our objective was to determine if these associations are independent of adipose tissue area among Asian Indians. We used data from 42 women and 57 Asian Indian men who did not use exogenous steroids or lipid-lowering medications. Lipoprotein subfractions including low-density lipoprotein cholesterol (LDL), very low-density lipoprotein cholesterol (VLDL), and intermediate-density lipoprotein (IDL) were assessed by ion mobility spectrometry. Intra-abdominal adiposity was assessed by computed tomography. Multivariable regression models estimated the association between sex hormones and lipoprotein subfractions before and after adjustment for adiposity. Among women, lower logSHBG levels were associated with smaller logLDL particle size and higher logtriglycerides, logVLDL, and logIDL, although these associations were attenuated with adjustment for visceral adiposity in particular. Among women, lower logSHBG levels were also significantly associated with lower logmedium LDL and logsmall LDL concentrations even after consideration of visceral and hepatic adiposity and insulin resistance as represented by the homeostasis model assessment of insulin resistance (HOMA-IR). Among men, lower logSHBG was also associated with smaller logLDL peak diameter size and higher logtriglycerides and logVLDL, even after adjustment for HOMA-IR and adiposity. Relationships between sex steroids and lipid subfractions were not significant among women. Among men, higher total testosterone was associated with higher logHDL and logLDL particle size, and lower logtriglycerides and logVLDL, but these associations were partially attenuated with adjustment for adiposity and HOMA-IR. Among Asian Indians, SHBG is associated with more favorable lipid subfraction concentrations, independent of hepatic and visceral fat.

  19. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices

    PubMed Central

    Ye, Xin; Pendyala, Ram M.; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152
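
    As a sketch of the kind of density extension described above (not the authors' estimator), the snippet below builds a semi-nonparametric density by multiplying a baseline Gumbel density by a squared Legendre-polynomial series evaluated on the Gumbel CDF mapped to [-1, 1], then renormalising it numerically; the coefficients and the exact construction are illustrative assumptions.

```python
# Hedged sketch of a semi-nonparametric (SNP) density around a Gumbel base.
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import gumbel_r
from scipy.integrate import quad

coef = np.array([1.0, 0.3, -0.2])   # hypothetical series coefficients

def snp_unnormalised(x):
    u = 2.0 * gumbel_r.cdf(x) - 1.0          # map the support onto [-1, 1]
    poly = legendre.legval(u, coef)          # Legendre series value
    return gumbel_r.pdf(x) * poly**2         # non-negative, possibly multimodal

norm_const, _ = quad(snp_unnormalised, -np.inf, np.inf)

def snp_pdf(x):
    return snp_unnormalised(x) / norm_const  # proper probability density

print(snp_pdf(0.0), quad(snp_pdf, -np.inf, np.inf)[0])  # density value and total mass ~ 1
```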

  20. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices.

    PubMed

    Wang, Ke; Ye, Xin; Pendyala, Ram M; Zou, Yajie

    2017-01-01

    A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences.

  1. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e. the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
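
    A small numerical illustration of the closed-form geometry exploited by such frameworks: under the square-root representation, the Fisher-Rao geodesic distance between two discretised densities is the arc length between their square roots on the unit sphere. The sketch below computes it for two toy ODFs; the discretisation and weights are assumptions, not the paper's implementation.

```python
# Hedged sketch: Fisher-Rao geodesic distance via the square-root representation.
import numpy as np

def fisher_rao_distance(p, q, weights):
    """Geodesic distance between two discretised densities p and q."""
    p = p / np.sum(p * weights)                  # normalise to unit mass
    q = q / np.sum(q * weights)
    inner = np.sum(np.sqrt(p * q) * weights)     # <sqrt(p), sqrt(q)> on the unit sphere
    return np.arccos(np.clip(inner, -1.0, 1.0))  # arc length along the great circle

# Toy example: an isotropic ODF versus a mildly peaked one on 100 directions.
n = 100
w = np.full(n, 1.0 / n)                          # equal quadrature weights (assumption)
iso = np.ones(n)
peaked = 1.0 + 0.5 * np.cos(np.linspace(0, 2 * np.pi, n))
print(fisher_rao_distance(iso, peaked, w))       # a "geometric anisotropy"-like measure
```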

  2. The Sabah Biodiversity Experiment: a long-term test of the role of tree diversity in restoring tropical forest structure and functioning

    PubMed Central

    Hector, Andy; Philipson, Christopher; Saner, Philippe; Chamagne, Juliette; Dzulkifli, Dzaeman; O'Brien, Michael; Snaddon, Jake L.; Ulok, Philip; Weilenmann, Maja; Reynolds, Glen; Godfray, H. Charles J.

    2011-01-01

    Relatively little is known about the relationship between biodiversity and ecosystem functioning in forests, especially in the tropics. We describe the Sabah Biodiversity Experiment: a large-scale, long-term field study on the island of Borneo. The project aims at understanding the relationship between tree species diversity and the functioning of lowland dipterocarp rainforest during restoration following selective logging. The experiment is planned to run for several decades (from seed to adult tree), so here we focus on introducing the project and its experimental design and on assessing initial conditions and the potential for restoration of the structure and functioning of the study system, the Malua Forest Reserve. We estimate residual impacts 22 years after selective logging by comparison with an appropriate neighbouring area of primary forest of similar conditions in Danum Valley. There was no difference in the alpha or beta species diversity of transect plots in the two forest types, probably owing to the selective nature of the logging and potential effects of competitive release. However, despite equal total stem density, forest structure differed as expected, with a deficit of large trees and a surfeit of saplings in selectively logged areas. These impacts on structure have the potential to influence ecosystem functioning. In particular, above-ground biomass and carbon pools in selectively logged areas were only 60 per cent of those in the primary forest even after 22 years of recovery. Our results establish the initial conditions for the Sabah Biodiversity Experiment and confirm the potential to accelerate restoration by using enrichment planting of dipterocarps to overcome recruitment limitation. What role dipterocarp diversity plays in restoration will only become clear with long-term results. PMID:22006970

  3. The Sabah Biodiversity Experiment: a long-term test of the role of tree diversity in restoring tropical forest structure and functioning.

    PubMed

    Hector, Andy; Philipson, Christopher; Saner, Philippe; Chamagne, Juliette; Dzulkifli, Dzaeman; O'Brien, Michael; Snaddon, Jake L; Ulok, Philip; Weilenmann, Maja; Reynolds, Glen; Godfray, H Charles J

    2011-11-27

    Relatively little is known about the relationship between biodiversity and ecosystem functioning in forests, especially in the tropics. We describe the Sabah Biodiversity Experiment: a large-scale, long-term field study on the island of Borneo. The project aims at understanding the relationship between tree species diversity and the functioning of lowland dipterocarp rainforest during restoration following selective logging. The experiment is planned to run for several decades (from seed to adult tree), so here we focus on introducing the project and its experimental design and on assessing initial conditions and the potential for restoration of the structure and functioning of the study system, the Malua Forest Reserve. We estimate residual impacts 22 years after selective logging by comparison with an appropriate neighbouring area of primary forest of similar conditions in Danum Valley. There was no difference in the alpha or beta species diversity of transect plots in the two forest types, probably owing to the selective nature of the logging and potential effects of competitive release. However, despite equal total stem density, forest structure differed as expected, with a deficit of large trees and a surfeit of saplings in selectively logged areas. These impacts on structure have the potential to influence ecosystem functioning. In particular, above-ground biomass and carbon pools in selectively logged areas were only 60 per cent of those in the primary forest even after 22 years of recovery. Our results establish the initial conditions for the Sabah Biodiversity Experiment and confirm the potential to accelerate restoration by using enrichment planting of dipterocarps to overcome recruitment limitation. What role dipterocarp diversity plays in restoration will only become clear with long-term results.

  4. How log-normal is your country? An analysis of the statistical distribution of the exported volumes of products

    NASA Astrophysics Data System (ADS)

    Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea

    2016-10-01

    We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but heavily depends on the level of development of the nation, as expressed by macroeconomic indicators like GDP, GDP per capita, total export and a recently introduced measure of countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries, b) a complete log-normal, with a wider range of volumes, for nations characterized by an intermediate economy, and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all 148 countries through different tests, Kolmogorov-Smirnov and Cramér-von Mises, confirming that it cannot be rejected only for the countries with an intermediate economy.
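
    A hedged sketch of this kind of log-normality check: log-transform the volumes, standardise, and apply Kolmogorov-Smirnov and Cramér-von Mises tests against a normal distribution. The volumes below are simulated rather than real export data, and because the parameters are estimated from the same sample the nominal p-values are only approximate.

```python
# Hedged sketch: testing log-normality of a sample of positive volumes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=10.0, sigma=2.0, size=500)   # hypothetical export volumes

z = np.log(volumes)
z = (z - z.mean()) / z.std(ddof=1)                        # standardised log-volumes

ks = stats.kstest(z, "norm")
cvm = stats.cramervonmises(z, "norm")
print(f"KS: D={ks.statistic:.3f}, p={ks.pvalue:.3f}")
print(f"Cramer-von Mises: W={cvm.statistic:.3f}, p={cvm.pvalue:.3f}")
```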

  5. Functional response of ungulate browsers in disturbed eastern hemlock forests

    USGS Publications Warehouse

    DeStefano, Stephen

    2015-01-01

    Ungulate browsing in predator depleted North American landscapes is believed to be causing widespread tree recruitment failures. However, canopy disturbances and variations in ungulate densities are sources of heterogeneity that can buffer ecosystems against herbivory. Relatively little is known about the functional response (the rate of consumption in relation to food availability) of ungulates in eastern temperate forests, and therefore how “top down” control of vegetation may vary with disturbance type, intensity, and timing. This knowledge gap is relevant in the Northeastern United States today with the recent arrival of hemlock woolly adelgid (HWA; Adelges tsugae) that is killing eastern hemlocks (Tsuga canadensis) and initiating salvage logging as a management response. We used an existing experiment in central New England begun in 2005, which simulated severe adelgid infestation and intensive logging of intact hemlock forest, to examine the functional response of combined moose (Alces americanus) and white-tailed deer (Odocoileus virginianus) foraging in two different time periods after disturbance (3 and 7 years). We predicted that browsing impacts would be linear or accelerating (Type I or Type III response) in year 3 when regenerating stem densities were relatively low and decelerating (Type II response) in year 7 when stem densities increased. We sampled and compared woody regeneration and browsing among logged and simulated insect attack treatments and two intact controls (hemlock and hardwood forest) in 2008 and again in 2012. We then used AIC model selection to compare the three major functional response models (Types I, II, and III) of ungulate browsing in relation to forage density. We also examined relative use of the different stand types by comparing pellet group density and remote camera images. In 2008, total and proportional browse consumption increased with stem density, and peaked in logged plots, revealing a Type I response. In 2012, stem densities were greatest in girdled plots, but proportional browse consumption was highest at intermediate stem densities in logged plots, exhibiting a Type III (rather than a Type II) functional response. Our results revealed shifting top–down control by herbivores at different stages of stand recovery after disturbance and in different understory conditions resulting from logging vs. simulated adelgid attack. If forest managers wish to promote tree regeneration in hemlock stands that is more resistant to ungulate browsers, leaving HWA-infested stands unmanaged may be a better option than preemptively logging them.
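
    As an illustration of the model-selection step described above (not the authors' code), the sketch below fits Type I, II and III functional-response curves to made-up browse-consumption data by least squares and compares them with AIC, assuming Gaussian errors.

```python
# Hedged sketch: comparing Type I/II/III functional responses by AIC.
import numpy as np
from scipy.optimize import curve_fit

stems = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)   # stems per plot (hypothetical)
browsed = np.array([2, 5, 9, 20, 35, 60, 70], dtype=float)     # stems browsed (hypothetical)

type1 = lambda N, a: a * N                                     # linear
type2 = lambda N, a, h: a * N / (1 + a * h * N)                # decelerating (Holling disc)
type3 = lambda N, a, h: a * N**2 / (1 + a * h * N**2)          # sigmoidal

def aic(model, p0):
    params, _ = curve_fit(model, stems, browsed, p0=p0, maxfev=10000)
    rss = np.sum((browsed - model(stems, *params)) ** 2)
    n, k = len(stems), len(params) + 1            # +1 for the error variance
    return n * np.log(rss / n) + 2 * k

for name, model, p0 in [("Type I", type1, [0.2]),
                        ("Type II", type2, [0.5, 0.01]),
                        ("Type III", type3, [0.01, 0.01])]:
    print(name, round(aic(model, p0), 1))
```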

  6. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
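
    The comparison can be sketched in a few lines. The snippet below fits Poisson, negative-binomial and zero-inflated Poisson models to a simulated zero-heavy count sample by maximum likelihood and compares AIC values; the data and the direct likelihood coding are illustrative assumptions rather than the paper's analysis.

```python
# Hedged sketch: AIC comparison of Poisson, negative-binomial and ZIP fits.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)
y = np.where(rng.random(200) < 0.6, 0, rng.poisson(4.0, 200))   # zero-heavy toy counts

def aic(neg_loglik, k):
    return 2 * k + 2 * neg_loglik

# Poisson: the MLE of the rate is the sample mean.
lam = y.mean()
aic_pois = aic(-stats.poisson.logpmf(y, lam).sum(), 1)

# Negative binomial (size n > 0, success probability 0 < p < 1).
nb_nll = lambda t: -stats.nbinom.logpmf(y, t[0], t[1]).sum()
nb_fit = minimize(nb_nll, x0=[1.0, 0.5], bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
aic_nb = aic(nb_fit.fun, 2)

# Zero-inflated Poisson (mixing weight pi, rate lam).
def zip_nll(t):
    pi, lam = t
    p_zero = pi + (1 - pi) * np.exp(-lam)                 # P(Y = 0)
    p_pos = (1 - pi) * stats.poisson.pmf(y, lam)          # P(Y = y), y > 0
    return -np.sum(np.where(y == 0, np.log(p_zero), np.log(np.clip(p_pos, 1e-300, None))))

zip_fit = minimize(zip_nll, x0=[0.5, 2.0], bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
aic_zip = aic(zip_fit.fun, 2)

print(f"AIC  Poisson: {aic_pois:.1f}  NegBin: {aic_nb:.1f}  ZIP: {aic_zip:.1f}")
```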

  7. A branching process model for the analysis of abortive colony size distributions in carbon ion-irradiated normal human fibroblasts.

    PubMed

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2014-05-01

    A single cell can form a colony, and ionizing irradiation has long been known to reduce such cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that a Monte Carlo simulation using the RCD probability estimated from such a linear relationship simulates well the experimentally determined surviving fraction and the relative biological effectiveness (RBE). We identified a short-term phase and a long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to determining cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
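
    A hedged Monte Carlo sketch in the spirit of such a branching-process model: in each generation every cell either divides or undergoes reproductive cell death with a fixed probability, and a colony is scored as surviving if it reaches 50 cells. The RCD probability, generation count and threshold are assumptions, not fitted values from the study.

```python
# Hedged sketch: branching-process colony growth with per-generation RCD.
import numpy as np

rng = np.random.default_rng(2)

def grow_colony(p_rcd, generations=10):
    cells = 1
    for _ in range(generations):
        dividing = rng.binomial(cells, 1.0 - p_rcd)   # cells that divide this generation
        cells = 2 * dividing + (cells - dividing)     # dividers double, the rest persist
    return cells

sizes = np.array([grow_colony(p_rcd=0.15) for _ in range(5000)])
surviving_fraction = np.mean(sizes >= 50)             # 50-cell threshold (assumption)
print(f"surviving fraction: {surviving_fraction:.3f}")
print("abortive colony sizes (first 10):", np.sort(sizes[sizes < 50])[:10])
```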

  8. When Smart People Fail: An Analysis of the Transaction Log of an Online Public Access Catalog.

    ERIC Educational Resources Information Center

    Peters, Thomas A.

    1989-01-01

    Describes a low cost study of the transaction logs of an online catalog at an academic library that examined failure rates, usage patterns, and probable causes of patron problems. The implications of the findings for bibliographic instruction and collection development are discussed and the benefits of analyzing transaction logs are identified.…

  9. Camera trap placement and the potential for bias due to trails and other features

    PubMed Central

    Forrester, Tavis D.

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11–33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9–38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design. PMID:29045478

  10. Camera trap placement and the potential for bias due to trails and other features.

    PubMed

    Kolowski, Joseph M; Forrester, Tavis D

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11-33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9-38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design.

  11. Estimation of the incubation period of invasive aspergillosis by survival models in acute myeloid leukemia patients.

    PubMed

    Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe

    2013-02-01

    The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).
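
    A minimal sketch of this kind of estimate (not the study's code): fit a log-normal distribution to the observed aplasia-to-IA intervals and report the median with a bootstrap 95% confidence interval. The intervals below are synthetic stand-ins for the 53 patient values.

```python
# Hedged sketch: log-normal fit of incubation intervals with a bootstrap CI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
days = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=53)   # hypothetical intervals (days)

shape, loc, scale = stats.lognorm.fit(days, floc=0)           # MLE with location fixed at 0
print(f"estimated median incubation: {scale:.1f} days")       # median = exp(mu) = scale

boot = [stats.lognorm.fit(rng.choice(days, size=days.size, replace=True), floc=0)[2]
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI: {lo:.1f}-{hi:.1f} days")
```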

  12. Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients

    PubMed

    Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil

    2018-03-27

    Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit with the available data. Methods: Cox regression and parametric models (exponential, Weibull, Gompertz, log-normal, log-logistic and generalized gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The log-normal, log-logistic and generalized gamma models provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log-normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log-normal model provides the best fit and is a good substitute for Cox regression.
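
    As a simplified illustration of comparing parametric survival models by AIC (ignoring censoring and covariates, which the study's Cox and parametric regressions handle), the sketch below fits Weibull, log-normal and log-logistic distributions to simulated survival times with scipy and ranks them by AIC.

```python
# Hedged sketch: AIC ranking of candidate parametric survival distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
t = rng.weibull(1.5, size=120) * 24.0        # hypothetical survival times (months)

candidates = {
    "Weibull":      stats.weibull_min,
    "Log-normal":   stats.lognorm,
    "Log-logistic": stats.fisk,
}

for name, dist in candidates.items():
    params = dist.fit(t, floc=0)                        # fix the location at zero
    loglik = dist.logpdf(t, *params).sum()
    k = len(params) - 1                                 # loc was fixed, not estimated
    print(f"{name:13s} AIC = {2 * k - 2 * loglik:.1f}")
```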

  13. Statistical properties of two sine waves in Gaussian noise.

    NASA Technical Reports Server (NTRS)

    Esposito, R.; Wilson, L. R.

    1973-01-01

    A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seems to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and of the instantaneous value, the moments of these distributions, and the cumulative distribution function (cdf).
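
    Since the statistics lack closed forms, a Monte Carlo sketch conveys the idea: draw two sine-wave phasors with independent uniform phases, add complex Gaussian noise, and histogram the magnitude to approximate the envelope pdf and its moments. The amplitudes and noise level below are illustrative choices, not values from the report.

```python
# Hedged sketch: envelope statistics of two sine waves plus Gaussian noise.
import numpy as np

rng = np.random.default_rng(5)
n, A1, A2, sigma = 1_000_000, 1.0, 0.7, 0.5          # assumed amplitudes and noise level

phase1 = rng.uniform(0, 2 * np.pi, n)
phase2 = rng.uniform(0, 2 * np.pi, n)
noise = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)

envelope = np.abs(A1 * np.exp(1j * phase1) + A2 * np.exp(1j * phase2) + noise)

pdf, edges = np.histogram(envelope, bins=200, density=True)   # empirical envelope pdf
centers = 0.5 * (edges[:-1] + edges[1:])
print("mean envelope:", envelope.mean(), "second moment:", np.mean(envelope**2))
```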

  14. A regional view of urban sedimentary basins in Northern California based on oil industry compressional-wave velocity and density logs

    USGS Publications Warehouse

    Brocher, T.M.

    2005-01-01

    Compressional-wave (sonic) and density logs from 119 oil test wells provide knowledge of the physical properties and impedance contrasts within urban sedimentary basins in northern California, which is needed to better understand basin amplification. These wire-line logs provide estimates of sonic velocities and densities for primarily Upper Cretaceous to Pliocene clastic rocks between 0.1- and 5.6-km depth, to an average depth of 1.8 km. Regional differences in the sonic velocities and densities in these basins largely reflect variations in the lithology, depth of burial, porosity, and grain size of the strata, but not necessarily formation age. For example, Miocene basin-filling strata west of the Calaveras Fault exhibit higher sonic velocities and densities than older but finer-grained and/or higher-porosity rocks of the Upper Cretaceous Great Valley Sequence. As another example, hard Eocene sandstones west of the San Andreas Fault have much higher impedances than Eocene strata, mainly higher-porosity sandstones and shales, located to the east of this fault, and approach those expected for Franciscan Complex basement rocks. Basement penetrations define large impedance contrasts at the sediment/basement contact along the margins of several basins, where Quaternary, Pliocene, and even Miocene deposits directly overlie Franciscan or Salinian basement rocks at depths of as much as 1.7 km. In contrast, in the deepest, geographic centers of the basins, such logs exhibit only a modest impedance contrast at the sediment/basement contact at depths exceeding 2 km. Prominent (up to 1 km/sec) and thick (up to several hundred meters) velocity and density reversals in the logs refute the common assumption that velocities and densities increase monotonically with depth.

  15. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gigase, Yves

    2007-07-01

    Available in abstract form only. Full text of publication follows: The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can, for example, provide quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in those decision processes where the uncertainty on the amount of activity is considered to be important, such as in probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
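
    One way such log-normal uncertainties can be combined, shown here only as an assumed illustration of the general approach rather than the author's formulation: a product of independent log-normal factors is again log-normal with summed log-means and log-variances, while a sum of log-normal package activities can be approximated by another log-normal via moment matching (Fenton-Wilkinson). The sketch below does the latter for invented package data.

```python
# Hedged sketch: Fenton-Wilkinson approximation for a sum of log-normal activities.
import numpy as np

medians = np.array([4.0, 2.5, 7.0])        # package activities, e.g. GBq (hypothetical)
gsd = np.array([2.0, 3.0, 1.5])            # geometric standard deviations (hypothetical)

mu = np.log(medians)
sigma = np.log(gsd)

# Moments of each log-normal, then of their (independent) sum.
means = np.exp(mu + sigma**2 / 2)
variances = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
m, v = means.sum(), variances.sum()

# Match the sum's mean and variance with a single log-normal.
sigma_tot = np.sqrt(np.log(1 + v / m**2))
mu_tot = np.log(m) - sigma_tot**2 / 2

lo, hi = np.exp(mu_tot - 1.96 * sigma_tot), np.exp(mu_tot + 1.96 * sigma_tot)
print(f"approx. 95% interval for total activity: {lo:.1f}-{hi:.1f}")
```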

  16. Methodological uncertainty in quantitative prediction of human hepatic clearance from in vitro experimental systems.

    PubMed

    Hallifax, D; Houston, J B

    2009-03-01

    Mechanistic prediction of unbound drug clearance from human hepatic microsomes and hepatocytes correlates with in vivo clearance but is both systematically low (10 - 20 % of in vivo clearance) and highly variable, based on detailed assessments of published studies. Metabolic capacity (Vmax) of commercially available human hepatic microsomes and cryopreserved hepatocytes is log-normally distributed within wide (30 - 150-fold) ranges; Km is also log-normally distributed and effectively independent of Vmax, implying considerable variability in intrinsic clearance. Despite wide overlap, average capacity is 2 - 20-fold (dependent on P450 enzyme) greater in microsomes than hepatocytes, when both are normalised (scaled to whole liver). The in vitro ranges contrast with relatively narrow ranges of clearance among clinical studies. The high in vitro variation probably reflects unresolved phenotypical variability among liver donors and practicalities in processing of human liver into in vitro systems. A significant contribution from the latter is supported by evidence of low reproducibility (several fold) of activity in cryopreserved hepatocytes and microsomes prepared from the same cells, between separate occasions of thawing of cells from the same liver. The large uncertainty which exists in human hepatic in vitro systems appears to dominate the overall uncertainty of in vitro-in vivo extrapolation, including uncertainties within scaling, modelling and drug dependent effects. As such, any notion of quantitative prediction of clearance appears severely challenged.

  17. Analysis of data from NASA B-57B gust gradient program

    NASA Technical Reports Server (NTRS)

    Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.

    1985-01-01

    Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included the calculation of average turbulence parameters, integral length scales, probability density functions, single point autocorrelation coefficients, two point autocorrelation coefficients, normalized autospectra, normalized two point autospectra, and two point cross spectra for gust velocities. The single point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two point spatial turbulence correlations.

  18. Cylinders out of a top hat: counts-in-cells for projected densities

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon

    2018-06-01

    Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders applicable to photometric surveys. It yields analytical predictions, accurate to a few per cent, for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders, and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime, where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.

  19. Empirical prediction intervals improve energy forecasting

    PubMed Central

    Kaack, Lynn H.; Apt, Jay; Morgan, M. Granger; McSharry, Patrick

    2017-01-01

    Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)’s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks. PMID:28760997
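
    The scoring step has a convenient closed form for Gaussian forecasts. The sketch below evaluates the continuous ranked probability score (CRPS) of a normal predictive density against an observation, using the standard closed-form expression; the forecast mean, spread and observation are illustrative numbers, not AEO values.

```python
# Hedged sketch: closed-form CRPS of a Gaussian density forecast.
import numpy as np
from scipy.stats import norm

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS for a N(mu, sigma^2) forecast and observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Example: a point projection of 100 with a Gaussian uncertainty (sigma = 8)
# estimated from past forecast errors, scored against an observed value of 90.
print(crps_gaussian(y=90.0, mu=100.0, sigma=8.0))
```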

  20. Statistical characteristics of mechanical heart valve cavitation in accelerated testing.

    PubMed

    Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M

    2004-07-01

    Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in-vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve, to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and the cavitation impulse were log-normally distributed, and the time span was normally distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for further establishing the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.

  1. Host range of the emerald ash borer (Agrilus planipennis Fairmaire) (Coleoptera: Buprestidae) in North America: results of multiple-choice field experiments.

    PubMed

    Anulewicz, Andrea C; McCullough, Deborah G; Cappaert, David L; Poland, Therese M

    2008-02-01

    Emerald ash borer (Agrilus planipennis Fairmaire) (Coleoptera: Buprestidae), an invasive phloem-feeding pest, was identified as the cause of widespread ash (Fraxinus) mortality in southeast Michigan and Windsor, Ontario, Canada, in 2002. A. planipennis reportedly colonizes other genera in its native range in Asia, including Ulmus L., Juglans L., and Pterocarya Kunth. Attacks on nonash species have not been observed in North America to date, but there is concern that other genera could be colonized. From 2003 to 2005, we assessed adult A. planipennis landing rates, oviposition, and larval development on North American ash species and congeners of its reported hosts in Asia in multiple-choice field studies conducted at several southeast Michigan sites. Nonash species evaluated included American elm (U. americana L.), hackberry (Celtis occidentalis L.), black walnut (J. nigra L.), shagbark hickory [Carya ovata (Mill.) K.Koch], and Japanese tree lilac (Syringa reticulata Bl.). In studies with freshly cut logs, adult beetles occasionally landed on nonash logs but generally laid fewer eggs than on ash logs. Larvae fed and developed normally on ash logs, which were often heavily infested. No larvae were able to survive, grow, or develop on any nonash logs, although failed first-instar galleries occurred on some walnut logs. High densities of larvae developed on live green ash and white ash nursery trees, but there was no evidence of larval survival or development on Japanese tree lilac and black walnut trees in the same plantation. We felled, debarked, and intensively examined >28 m2 of phloem area on nine American elm trees growing in contact with or adjacent to heavily infested ash trees. We found no sign of A. planipennis feeding on any elm.

  2. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    NASA Technical Reports Server (NTRS)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with either a power-law or a normal distribution. As a demonstration of the applicability of these findings to an improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
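
    A hedged sketch of this type of distribution comparison: fit log-normal and exponential models to a sample of positive measurements by maximum likelihood and compare AIC values and Kolmogorov-Smirnov statistics. The "areas" below are simulated stand-ins, not the measured SAD values.

```python
# Hedged sketch: log-normal versus exponential fits ranked by AIC and KS statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
areas = rng.lognormal(mean=1.0, sigma=0.8, size=120)     # hypothetical areas

fits = {
    "log-normal":  (stats.lognorm, stats.lognorm.fit(areas, floc=0)),
    "exponential": (stats.expon,   stats.expon.fit(areas, floc=0)),
}
for name, (dist, params) in fits.items():
    loglik = dist.logpdf(areas, *params).sum()
    k = len(params) - 1                                  # loc fixed at zero
    ks = stats.kstest(areas, dist.cdf, args=params).statistic
    print(f"{name:12s} AIC={2 * k - 2 * loglik:7.1f}  KS={ks:.3f}")
```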

  3. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess the data distributions themselves.

  4. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites: Observed cloud variability at ARM sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-17

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the log normal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.

  5. Trajectories of saltating sand particles behind a porous fence

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Lee, Sang Joon; Chen, Ting-Guo

    2015-01-01

    Trajectories of aeolian sand particles behind a porous wind fence embedded in a simulated atmospheric boundary layer were visualized experimentally to investigate the shelter effect of the fence on sand saltation. Two sand samples, one collected from a beach (d = 250 μm) and the other from a desert (d = 100 μm), were tested in comparison with previous studies of a 'no-fence' case. A wind fence (ε = 38.5%) was installed on a flat sand bed filled with each sand sample. A high-speed photography technique and the particle tracking velocimetry (PTV) method were employed to reconstruct the trajectories of particles saltating behind the fence. The collision processes of these sand particles were analyzed, and momentum and kinetic energy transfer between saltating particles and the ground surface were also investigated. In the wake region, probability density distributions of the impact velocities agree well with the pattern of the no-fence case, and can be explained by a log-normal law. The horizontal component of impact velocity for the beach sand is decreased by about 54%, and about 76% for the desert sand. Vertical restitution coefficients of bouncing particles are smaller than 1.0 due to the presence of the wind fence. The saltating particles lose a large proportion of their energy during the collision process. These results illustrate that the porous wind fence effectively abates the further evolution of saltating sand particles.

  6. 40 CFR 146.87 - Logging, sampling, and testing prior to injection well operation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... owner or operator must submit to the Director a descriptive report prepared by a knowledgeable log... installed; and (ii) A cement bond and variable density log to evaluate cement quality radially, and a...

  7. 40 CFR 146.87 - Logging, sampling, and testing prior to injection well operation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... owner or operator must submit to the Director a descriptive report prepared by a knowledgeable log... installed; and (ii) A cement bond and variable density log to evaluate cement quality radially, and a...

  8. 40 CFR 146.87 - Logging, sampling, and testing prior to injection well operation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... owner or operator must submit to the Director a descriptive report prepared by a knowledgeable log... installed; and (ii) A cement bond and variable density log to evaluate cement quality radially, and a...

  9. 40 CFR 146.87 - Logging, sampling, and testing prior to injection well operation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... owner or operator must submit to the Director a descriptive report prepared by a knowledgeable log... installed; and (ii) A cement bond and variable density log to evaluate cement quality radially, and a...

  10. On the distribution of scaling hydraulic parameters in a spatially anisotropic banana field

    NASA Astrophysics Data System (ADS)

    Regalado, Carlos M.

    2005-06-01

    When modeling soil hydraulic properties at the field scale it is desirable to approximate the variability in a given area by means of scaling transformations which relate spatially variable local hydraulic properties to global reference characteristics. Seventy soil cores were sampled within a drip-irrigated banana plantation greenhouse on a 14×5 array of 2.5 m×5 m rectangles at 15 cm depth, to represent the field-scale variability of flow-related properties. Saturated hydraulic conductivity and water retention characteristics were measured in these 70 soil cores. van Genuchten water retention curves (WRC) with optimized m (m ≠ 1 − 1/n) were fitted to the WR data and a general Mualem-van Genuchten model was used to predict hydraulic conductivity functions for each soil core. A scaling law of the form νi = αi ν* was fitted to the soil hydraulic data, such that the original hydraulic parameters νi were scaled down to a reference curve with parameters ν*. An analytical expression, in terms of Beta functions, was obtained for the average suction value, hc, necessary to apply the above scaling method. A robust optimization procedure with fast convergence to the global minimum is used to find the optimum hc, such that dispersion is minimized in the scaled data set. Via the Box-Cox transformation P(τ) = (αi^τ − 1)/τ, Box-Cox normality plots showed that the scaling factors for the suction (αh) and hydraulic conductivity (αK) were approximately log-normally distributed (i.e. τ = 0), as would be expected for such dynamic properties involving flow. By contrast, static soil-related properties such as αθ were found to be closely Gaussian, although a power τ = 3/4 was best for approaching normality. Application of four different normality tests (Anderson-Darling, Shapiro-Wilk, Kolmogorov-Smirnov and χ2 goodness-of-fit tests) rendered somewhat contradictory results, suggesting that this widespread practice is not recommended for providing a suitable probability density function for the scaling parameters αi. Some indications of the origin of these disagreements, in terms of population size and test constraints, are pointed out. Visual inspection of normal probability plots can also lead to erroneous results. The scaling parameters αθ and αK show a sinusoidal spatial variation coincident with the underlying alignment of banana plants in the field. Such an anisotropic distribution is explained in terms of porosity variations due to processes promoting soil degradation, such as surface desiccation and soil compaction induced by tillage and localized irrigation of banana plants, and it is quantified by means of cross-correlograms.
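
    The Box-Cox step can be illustrated briefly: estimate the power τ that best normalises a sample of scaling factors and test the log-transformed values (the τ = 0 case) for normality. The scaling factors below are simulated, not the banana-field values, and the tests shown are a subset of those used in the study.

```python
# Hedged sketch: Box-Cox power estimation and normality checks for scaling factors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha = rng.lognormal(mean=0.0, sigma=0.4, size=70)      # hypothetical scaling factors

transformed, tau_hat = stats.boxcox(alpha)               # MLE of the Box-Cox power
print(f"estimated Box-Cox power tau = {tau_hat:.2f} (near 0 suggests log-normality)")

# Normality tests on the log-transformed factors (the tau = 0 special case).
logged = np.log(alpha)
w, p_shapiro = stats.shapiro(logged)
ad = stats.anderson(logged, dist="norm")
print("Shapiro-Wilk p =", round(p_shapiro, 3))
print("Anderson-Darling statistic =", round(ad.statistic, 3))
```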

  11. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  12. [Characteristics of soil seed banks in logging gaps of forests at different succession stages in Changbai Mountains].

    PubMed

    Zhang, Zhi-Ting; Song, Xin-Zhang; Xiao, Wen-Fa; Gao, Bao-Jia; Guo, Zhong-Ling

    2009-06-01

    An investigation was made of the soil seed banks in the logging gaps of Populus davidiana-Betula platyphylla secondary forest, secondary broad-leaved forest, and broad-leaved Korean pine mixed forest at their different succession stages in the Changbai Mountains. Among the test forests, secondary broad-leaved forest had the highest individual density (652 ind x m(-2)) in its soil seed bank. With the succession of the forest community, the diversity and uniformity of the soil seed bank increased, but the dominance decreased. The seed density of climax species such as Pinus koraiensis, Abies nephrolepis, and Acer mono increased, whereas that of Maackia amurensis and Fraxinus mandshurica decreased. Moreover, the similarity in species composition between the soil seed bank and the seedlings within logging gaps became higher. The individual density and the similarity between the soil seed bank and the seedlings in non-logging gaps were similar to those in logging gaps. All of these indicated that the soil seed bank provided rich seed resources for forest recovery and succession, and that the influence of the soil seed bank on seedling regeneration increased with succession.

  13. 40 CFR 146.12 - Construction requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...; and (B) A cement bond, temperature, or density log after the casing is set and cemented. (ii) For... cement bond, temperature, or density log after the casing is set and cemented. (e) At a minimum, the... water. The casing and cement used in the construction of each newly drilled well shall be designed for...

  14. A Trust-Based Secure Routing Scheme Using the Traceback Approach for Energy-Harvesting Wireless Sensor Networks.

    PubMed

    Tang, Jiawei; Liu, Anfeng; Zhang, Jian; Xiong, Neal N; Zeng, Zhiwen; Wang, Tian

    2018-03-01

    The Internet of things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of TBSR are: (a) source nodes send data and notifications to sinks through disjoint paths, so that the data and notifications can be verified independently to ensure their security; (b) the data and notifications adopt a dynamic marking-and-logging probability during routing, so that, when attacked, the network can use the traceback approach to locate and remove malicious nodes. The marking probability is determined by the remaining battery level: when nodes harvest more energy, the marking probability is higher, which improves network security. With a higher marking probability, more nodes along a data packet's routing path are marked, and the sink is more likely to trace back the path and identify malicious nodes from the notifications. When data packets are routed again, they tend to bypass these malicious nodes, which raises the routing success rate and improves network security. When the battery level is low, the marking probability is decreased, which saves energy. For logging, when the battery level is high, the network adopts a larger marking probability and a smaller logging probability for transmitting notifications to the sink, which reserves enough storage space to meet the storage demand while the battery level is low; when the battery level is low, increasing the logging probability reduces energy consumption. Once the remaining battery level is high enough, nodes forward the previously logged notifications to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme is improved comprehensively: it increases the quantity of notifications received by the sink by 20%, increases energy efficiency by 11%, reduces the maximum storage capacity needed by nodes by 33.3% and improves the routing success rate by approximately 16.30%.

  16. Analyzing coastal environments by means of functional data analysis

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.

    2017-07-01

    Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension-reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful for describing variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF) approach, was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transformation to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, to be a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.

  17. Pest risk assessment of the importation into the United States of unprocessed Pinus and Abies logs from Mexico

    Treesearch

    B. M. Tkacz; H. H. Burdsall; G. A. DeNitto; A. Eglitis; J. B. Hanson; J. T. Kliejunas; W. E. Wallner; J. G. O`Brien; E. L. Smith

    1998-01-01

    The unmitigated pest risk potential for the importation of Pinus and Abies logs from all states of Mexico into the United States was assessed by estimating the probability and consequences of establishment of representative insects and pathogens of concern. Twenty-two individual pest risk assessments were prepared for Pinus logs, twelve dealing with insects and ten...

  18. Coordination of gaze and speech in communication between children with hearing impairment and normal-hearing peers.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-06-01

    To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions, statements, back channeling, and silence) as the predictor variable, group characteristics in gaze behavior were expressed with Kaplan-Meier survival functions (estimating time to gaze-to-partner) and odds ratios (comparing number of verbal events with and without gaze-to-partner). Analyses compared the listeners in each dyad (HI: n = 10, mean age = 12;6 years, mean better ear pure-tone average = 33.0 dB HL; NH: n = 10, mean age = 13;7 years). Log-rank tests revealed significant group differences in survival distributions for all verbal events, reflecting a higher probability of gaze to the partner's face for participants with HI. Expressed as odds ratios (OR), participants with HI displayed greater odds for gaze-to-partner (ORs ranging between 1.2 and 2.1) during all verbal events. The results show an increased probability for listeners with HI to gaze at the speaker's face in association with verbal events. Several explanations for the finding are possible, and implications for further research are discussed.

  19. Assessing cadmium exposure risks of vegetables with plant uptake factor and soil property.

    PubMed

    Yang, Yang; Chang, Andrew C; Wang, Meie; Chen, Weiping; Peng, Chi

    2018-07-01

    Plant uptake factors (PUFs) are of great importance in human cadmium (Cd) exposure risk assessment, yet they have often been treated in a generic way. We collected 1077 pairs of vegetable-soil samples from production fields to characterize Cd PUFs and demonstrated their utility in assessing Cd exposure risks to consumers of locally grown vegetables. The Cd PUFs varied with plant species and with the pH and organic matter content of soils. Once the PUFs were normalized against soil parameters, their distributions were log-normal in nature. In this manner, the PUFs were represented by definable probability distributions instead of a deterministic figure. The Cd exposure risks were then assessed using the normalized PUFs with a Monte Carlo simulation algorithm. Factors affecting the extent of Cd exposure were isolated through sensitivity analyses. The normalized PUFs illustrate the outcomes for uncontaminated and slightly contaminated soils. Among the vegetables, lettuce was potentially hazardous for residents because of its high Cd accumulation but low Zn concentration. To protect 95% of the lettuce production from causing excessive Cd exposure risks, soil pH needed to be 5.9 or above. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    NASA Astrophysics Data System (ADS)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.

  1. A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt

    NASA Astrophysics Data System (ADS)

    Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.

    2014-12-01

    The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and whose data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods, which rely on underlying assumptions of Gaussian distributions of measurements and data-model differences and on innovations to the model being small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, and our choice to minimize differences in instrument counts rather than phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.

  2. Gluten-containing grains skew gluten assessment in oats due to sample grind non-homogeneity.

    PubMed

    Fritz, Ronald D; Chen, Yumin; Contreras, Veronica

    2017-02-01

    Oats are easily contaminated with gluten-rich kernels of wheat, rye and barley. These contaminants act like gluten 'pills', shown here to skew gluten analysis results. Using the R-Biopharm R5 ELISA, we quantified gluten in gluten-free oatmeal servings from an in-market survey. For samples reading 5-20 ppm on a first test, replicate analyses gave results ranging from <5 ppm to >160 ppm. This suggests that sample grinding may disperse gluten too unevenly to allow a single accurate gluten assessment. To ascertain this and to characterize the distribution of 0.25-g gluten test results for kernel-contaminated oats, twelve 50-g samples of pure oats were each spiked with a wheat kernel; the 0.25-g test results followed log-normal-like distributions. With this, we estimate the probability of mis-assessment for a 'single measure per sample' relative to the <20 ppm regulatory threshold, and derive an equation relating the probability of mis-assessment to sample average gluten content. Copyright © 2016 Elsevier Ltd. All rights reserved.
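
    To illustrate the kind of mis-assessment probability discussed above (a sketch only; the lot mean, dispersion and log-normal parameterization are assumptions, not the paper's fitted equation):

```python
import numpy as np
from scipy import stats

# Hypothetical log-normal model for single 0.25-g test results from a
# kernel-contaminated lot; the values below are illustrative, not the paper's fit.
true_mean_ppm = 40.0      # assumed lot-average gluten content
sigma = 1.0               # assumed dispersion of log(result)
# Choose mu so the log-normal mean equals the assumed lot average:
# E[X] = exp(mu + sigma**2 / 2)
mu = np.log(true_mean_ppm) - sigma**2 / 2

threshold = 20.0          # regulatory threshold (ppm)
# Probability that a single test falls below the threshold even though the
# lot average exceeds it, i.e. a mis-assessment from one measurement.
p_miss = stats.lognorm.cdf(threshold, s=sigma, scale=np.exp(mu))
print(f"P(single test < {threshold} ppm | lot mean = {true_mean_ppm} ppm) = {p_miss:.2f}")
```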

  3. Statistical analysis of PM₁₀ concentrations at different locations in Malaysia.

    PubMed

    Sansuddin, Nurulilyana; Ramli, Nor Azam; Yahaya, Ahmad Shukri; Yusof, Noor Faizah Fitri Md; Ghazali, Nurul Adyani; Madhoun, Wesam Ahmed Al

    2011-09-01

    Malaysia has experienced several haze events since the 1980s as a consequence of the transboundary movement of air pollutants emitted from forest fires and open burning activities. Hazy episodes can also result from local activities and be categorized as "localized haze". General probability distributions (i.e., gamma and log-normal) were chosen to analyze the PM10 concentration data at two different types of locations in Malaysia: industrial (Johor Bahru and Nilai) and residential (Kota Kinabalu and Kuantan). These areas were chosen because of their frequently high PM10 concentration readings. The best models representing each area were chosen based on their performance indicator values. The best-fitting distributions provided the probability of exceedance and the return period, for both the actual and the predicted concentrations, relative to the threshold limit given by the Malaysian Ambient Air Quality Guidelines (24-h average of 150 μg/m³) for PM10 concentrations. A short-term prediction of PM10 exceedances over 14 days was obtained using an autoregressive model.
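
    As a rough illustration of the distribution-fitting step described above (not the study's data or code), the sketch below fits both candidate distributions to simulated daily PM10 values and derives the exceedance probability and return period for the 150 μg/m³ guideline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical daily-average PM10 concentrations (ug/m3) standing in for one
# monitoring station; the real study used measured data.
pm10 = rng.lognormal(mean=np.log(60), sigma=0.5, size=365)

threshold = 150.0  # Malaysian Ambient Air Quality Guideline, 24-h average

# Fit the two candidate distributions named in the abstract.
gamma_params = stats.gamma.fit(pm10, floc=0)
lognorm_params = stats.lognorm.fit(pm10, floc=0)

for name, dist, params in [("gamma", stats.gamma, gamma_params),
                           ("log-normal", stats.lognorm, lognorm_params)]:
    p_exceed = dist.sf(threshold, *params)                       # probability of exceedance
    return_period = np.inf if p_exceed == 0 else 1.0 / p_exceed  # in days
    print(f"{name}: P(PM10 > {threshold}) = {p_exceed:.4f}, "
          f"return period ~ {return_period:.0f} days")
```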

  4. Assessment of the hygienic performances of hamburger patty production processes.

    PubMed

    Gill, C O; Rahn, K; Sloan, K; McMullen, L M

    1997-05-20

    The hygienic condition of hamburger patties collected from three patty manufacturing plants and six retail outlets was examined. At each manufacturing plant, a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two or one retail outlets, respectively, 25 samples from frozen, chilled, or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean log (x) and standard deviation (s) were calculated for the log10 values of each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between <0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, <0.5 to 3.6 and <0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively. The findings indicate that the general hygienic condition of hamburger patties could be improved by manufacturing them only from manufacturing beef of superior hygienic quality, and by better management of chilled patties at retail outlets.
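
    Under the log-normal assumption stated above, the log10 of the arithmetic mean follows directly from x and s; the short sketch below shows the standard relationship (a plausible reconstruction, since the abstract does not spell out the exact formula the authors used).

```python
import numpy as np

def log_arithmetic_mean(mean_log10, sd_log10):
    """log10 of the arithmetic mean of counts, assuming the log10 counts are
    normally distributed with the given mean and standard deviation."""
    # If log10(X) ~ N(m, s^2), then E[X] = 10**(m + ln(10)/2 * s^2)
    return mean_log10 + 0.5 * np.log(10.0) * sd_log10**2

# Example: a set of 25 counts with mean log10 of 4.4 and standard deviation 0.6
print(f"log A = {log_arithmetic_mean(4.4, 0.6):.2f}")
```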

  5. A method to describe inelastic gamma field distribution in neutron gamma density logging.

    PubMed

    Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang

    2017-11-01

    Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in LWD; however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of each formation parameter to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to that of the inelastic scattering cross section and the fast-neutron scattering free path. As the detector spacing increases, density attenuation gradually plays the dominant role in the gamma field distribution, which means a large detector spacing is more favorable for density measurement. In addition, the relationship between density sensitivity and detector spacing was studied using this gamma field distribution, and the spacings of the near and far gamma-ray detectors were determined accordingly. The research provides theoretical guidance for tool parameter design and density determination in the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Measuring Resistance to Change at the Within-Session Level

    ERIC Educational Resources Information Center

    Tonneau, Francois; Rios, Americo; Cabrera, Felipe

    2006-01-01

    Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases…

  7. The Mouse Cortical Connectome, Characterized by an Ultra-Dense Cortical Graph, Maintains Specificity by Distinct Connectivity Profiles.

    PubMed

    Gămănuţ, Răzvan; Kennedy, Henry; Toroczkai, Zoltán; Ercsey-Ravasz, Mária; Van Essen, David C; Knoblauch, Kenneth; Burkhalter, Andreas

    2018-02-07

    The inter-areal wiring pattern of the mouse cerebral cortex was analyzed in relation to a refined parcellation of cortical areas. Twenty-seven retrograde tracer injections were made in 19 areas of a 47-area parcellation of the mouse neocortex. Flat mounts of the cortex and multiple histological markers enabled detailed counts of labeled neurons in individual areas. The observed log-normal distribution of connection weights to each cortical area spans 5 orders of magnitude and reveals a distinct connectivity profile for each area, analogous to that observed in macaques. The cortical network has a density of 97%, considerably higher than the 66% density reported in macaques. A weighted graph analysis reveals a similar global efficiency but weaker spatial clustering compared with that reported in macaques. The consistency, precision of the connectivity profile, density, and weighted graph analysis of the present data differ significantly from those obtained in earlier studies in the mouse. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Seismic velocity deviation log: An effective method for evaluating spatial distribution of reservoir pore types

    NASA Astrophysics Data System (ADS)

    Shirmohamadi, Mohamad; Kadkhodaie, Ali; Rahimpour-Bonab, Hossain; Faraji, Mohammad Ali

    2017-04-01

    Velocity deviation log (VDL) is a synthetic log used to determine pore types in reservoir rocks based on a combination of the sonic log with neutron-density logs. The current study proposes a two-step approach to create a map of porosity and pore types by integrating the results of petrographic studies, well logs and seismic data. In the first step, the velocity deviation log was created from the combination of the sonic log with the neutron-density log. The results allowed identifying negative, zero and positive deviations based on the created synthetic velocity log. Negative velocity deviations (below −500 m/s) indicate connected or interconnected pores and fractures, while positive deviations (above +500 m/s) are related to isolated pores. Zero deviations in the range of [−500 m/s, +500 m/s] are in good agreement with intercrystalline and microporosities. The results of petrographic studies were used to validate the main pore type derived from the velocity deviation log. In the next step, the velocity deviation log was estimated from seismic data by using a probabilistic neural network model. For this purpose, the inverted acoustic impedance along with amplitude-based seismic attributes were formulated to VDL. The methodology is illustrated by a case study from the Hendijan oilfield, northwestern Persian Gulf. The results of this study show that integration of petrographic studies, well logs and seismic attributes is an effective way to understand the spatial distribution of the main reservoir pore types.

  9. Monitoring the Groningen gas field by seismic noise interferometry

    NASA Astrophysics Data System (ADS)

    Zhou, Wen; Paulssen, Hanneke

    2017-04-01

    The Groningen gas field in the Netherlands is the world's 7th largest onshore gas field and has been producing since 1963. Since 2013, the year with the highest level of induced seismicity, the reservoir has been monitored by two geophone strings at reservoir level at about 3 km depth. In borehole SDM, 10 geophones with a natural frequency of 15 Hz are positioned from the top to the bottom of the reservoir with a geophone spacing of 30 m. We used seismic interferometry to determine, as accurately as possible, the inter-geophone P- and S-wave velocities from ambient noise. We applied 1-bit normalization and spectral whitening, together with a bandpass filter from 3 to 400 Hz. After that, for each station pair, the normalized cross-correlation was calculated for 6-second segments with 2/3 overlap. These segmented cross-correlations were stacked for every hour, giving 24 (hours) × 33 (days) hourly stacks for each station pair. The cross-correlations show both day-and-night and weekly variations reflecting fluctuations in cultural noise. The apparent P-wave travel time for each geophone pair was measured from the maximum of the vertical-component cross-correlation for each of the hourly stacks. Because the distribution of these 24 × 33 picked travel times is not Gaussian but skewed, we used kernel density estimation to obtain probability density functions of the travel times. The maximum-likelihood travel times of all the geophone pairs were subsequently used to determine inter-geophone P-wave velocities. Good agreement was found between our estimated P-velocity structure and well logging data, with differences of less than 5%. The S-velocity structure was obtained from the east-component cross-correlations. They show both the direct P- and S-wave arrivals and, because of the interference, the inferred S-velocity structure is less accurate. From the nine (3×3) component cross-correlations for all the geophone pairs, not only the direct P and S waves can be identified, but also, for some of the cross-correlations, reflected waves within the reservoir. It is concluded that noise interferometry can be used to determine the seismic velocity structure from deep borehole data.
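
    To make the travel-time picking step concrete, here is a minimal sketch (not the authors' code) of taking the mode of a kernel density estimate as the maximum-likelihood travel time for one geophone pair; the simulated skewed picks and all numbers are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Hypothetical hourly travel-time picks (seconds) for one geophone pair,
# skewed rather than Gaussian, standing in for the 24 x 33 picks per pair.
picks = 0.012 + rng.gamma(shape=2.0, scale=0.001, size=24 * 33)

# Kernel density estimate of the pick distribution; its mode is taken as the
# maximum-likelihood travel time for this pair.
kde = gaussian_kde(picks)
grid = np.linspace(picks.min(), picks.max(), 2000)
t_ml = grid[np.argmax(kde(grid))]

print(f"maximum-likelihood travel time ~ {t_ml * 1e3:.2f} ms "
      f"(mean pick = {picks.mean() * 1e3:.2f} ms)")
```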

  10. Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications

    DTIC Science & Technology

    2006-03-01

    know the probability of that from Lemma 2. Using the union bound, we know that for any query q, the probability that the i-am-feeling-lucky search algorithm...and each point in a d-dimensional space, a naive k-NN search needs to do a linear scan of T for every single query q, and thus the computational time...algorithm based on partition trees with priority search, and give an expected query time O((1/ε)^d log n). But the constant in the O((1/ε)^d log n

  11. Passive microrheology of normal and cancer cells after ML7 treatment by atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyapunova, Elena, E-mail: lyapunova@icmm.ru; Ural Federal University, Kuibyishev Str. 48, Ekaterinburg, 620000; Nikituk, Alexander, E-mail: nas@icmm.ru

    Mechanical properties of living cancer and normal thyroidal cells were investigated by atomic force microscopy (AFM). Cell mechanics was compared before and after treatment with ML7, which is known to reduce myosin activity and induce softening of cell structures. We recorded force curves with an extended dwell time of 6 seconds in contact at maximum forces from 500 pN to 1 nN. Data were analyzed within different frameworks: a Hertz fit was applied in order to evaluate differences in Young's moduli among cell types and conditions, while the fluctuations of the cantilever in contact with cells were analyzed with both conventional algorithms (probability density function and power spectral density) and multifractal detrended fluctuation analysis (MF-DFA). We found that cancer cells were softer than normal cells and that ML7 had a substantial softening effect on normal cells, but only a marginal one on cancer cells. Moreover, we observed that all recorded signals for normal and cancer cells were monofractal with small differences between their scaling parameters. Finally, the applicability of wavelet-based methods of data analysis for the discrimination of different cell types is discussed.

  12. Empirical analysis on the runners' velocity distribution in city marathons

    NASA Astrophysics Data System (ADS)

    Lin, Zhenquan; Meng, Fan

    2018-01-01

    In recent decades, much research has been performed on human temporal activity and mobility patterns, but few investigations have examined the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City marathon, the Chicago marathon, the Berlin marathon and the London marathon. By statistical analyses of the datasets of finish time records, we captured some statistical features of human behavior in marathons: (1) the velocity distributions of all finishers and of the subset of finishers in the fastest age group both follow a log-normal distribution; (2) in the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) the intensity of the competition, described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition becomes stronger in the last course of the middle stage, a transition from the Gaussian distribution back to the log-normal one occurs at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.

  13. Effects of host plant and larval density on intraspecific competition in larvae of the emerald ash borer (Coleoptera: Buprestidae).

    PubMed

    Duan, Jian J; Larson, Kristi; Watt, Tim; Gould, Juli; Lelito, Jonathan P

    2013-12-01

    Competition for food, mates, and space among different individuals of the same insect species can affect density-dependent regulation of insect abundance or population dynamics. The emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), is a serious invasive pest of North American ash (Fraxinus spp.) trees, with its larvae feeding in serpentine galleries at the interface of the sapwood and phloem tissues of ash trees. Using artificial infestation of freshly cut logs of green ash (Fraxinus pennsylvanica Marshall) and tropical ash (Fraxinus uhdei [Wenzig] Lingelsh) with a series of egg densities, we evaluated the mechanism and outcome of intraspecific competition in larvae of A. planipennis in relation to larval density and host plant species. Results from our study showed that as the egg densities on each log (1.5-6.5 cm in diameter and 22-25 cm in length) increased from 200 to 1,600 eggs per square meter of surface area, larval survivorship declined from ≈68% to 10% for the green ash logs, and from 86% to 55% for the tropical ash logs. Accordingly, larval mortality resulting from cannibalism, starvation, or both significantly increased as egg density increased, and the biomass of surviving larvae significantly decreased on both ash species. When larval density was adjusted to the same level, however, larval mortality from intraspecific competition was significantly higher and the mean biomass of surviving larvae significantly lower in green ash than in tropical ash. The role of intraspecific competition of A. planipennis larvae in density-dependent regulation of its natural population dynamics is discussed.

  14. Far-infrared properties of cluster galaxies

    NASA Technical Reports Server (NTRS)

    Bicay, M. D.; Giovanelli, R.

    1987-01-01

    Far-infrared properties are derived for a sample of over 200 galaxies in seven clusters: A262, Cancer, A1367, A1656 (Coma), A2147, A2151 (Hercules), and Pegasus. The IR-selected sample consists almost entirely of IR normal galaxies, with Log of L(FIR) = 9.79 solar luminosities, Log of L(FIR)/L(B) = 0,79, and Log of S(100 microns)/S(60 microns) = 0.42. None of the sample galaxies has Log of L(FIR) greater than 11.0 solar luminosities, and only one has a FIR-to-blue luminosity ratio greater than 10. No significant differences are found in the FIR properties of HI-deficient and HI-normal cluster galaxies.

  15. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit

    2009-01-01

    Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.

  16. Compacting biomass waste materials for use as fuel

    NASA Astrophysics Data System (ADS)

    Zhang, Ou

    Every year, biomass waste materials are produced in large quantities. The combustibles in biomass waste materials make up over 70% of the total waste. How to utilize these waste materials is important to the nation and the world. The purpose of this study is to test optimum processes and conditions for compacting a number of biomass waste materials to form a densified solid fuel for use at coal-fired power plants or ordinary commercial furnaces. Successful use of such fuel as a substitute for or in cofiring with coal not only solves a solid waste disposal problem but also reduces the release of some gases from burning coal which cause health problems, acid rain and global warming. The unique punch-and-die process developed at the Capsule Pipeline Research Center, University of Missouri-Columbia, was used for compacting the solid wastes, including waste paper, plastics (both film and hard products), textiles, leaves, and wood. The compaction was performed to produce strong compacts (biomass logs) at room temperature without binder and without preheating. The compaction conditions important to the commercial production of densified biomass fuel logs, including compaction pressure, pressure holding time, back pressure, moisture content, particle size, binder effects, and mold conditions, were studied and optimized. The properties of the biomass logs were evaluated in terms of physical, mechanical, and combustion characteristics. It was found that the compaction pressure and the initial moisture content of the biomass material play critical roles in producing high-quality biomass logs. Under optimized compaction conditions, biomass waste materials can be compacted into high-quality logs with a density of 0.8 to 1.2 g/cm3. The logs made from the combustible wastes have a heating value in the range 6,000 to 8,000 Btu/lb, which is only slightly (10 to 30%) less than that of subbituminous coal. To evaluate the feasibility of cofiring biomass logs with coal, burn tests were conducted in a stoker boiler. A separate burning test was also carried out by burning biomass logs alone in an outdoor hot-water furnace for heating a building. Based on a previous coal compaction study, the process of biomass compaction was studied numerically by use of a non-linear finite element code. A constitutive model with sufficient generality was adapted for biomass material to deal with pore contraction during compaction. A contact node algorithm was applied to implement the effect of mold wall friction into the finite element program. Numerical analyses were made to investigate the pressure distribution in a die normal to the axis of compaction, and to investigate the density distribution in a biomass log after compaction. The results of the analyses gave generally good agreement with theoretical analysis of coal log compaction, although assumptions had to be made about the variation in the elastic modulus of the material and Poisson's ratio during the compaction cycle.

  17. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  18. Effects of Methamphetamine on Vigilance and Tracking during Extended Wakefulness.

    DTIC Science & Technology

    1993-09-01

    the log likelihood ratio (log(p); Green & Swets, 1966; Macmillan & Creelman, 1990), was also derived from hit and false-alarm probabilities...vigilance task. Canadian Journal of Psychology, 19, 104-110. Macmillan, N.E., & Creelman, C.D. (1990). Response bias: Characteristics of detection

  19. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.

  20. Probability density functions for CP-violating rephasing invariants

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Giasson, Nicolas; Marleau, Luc

    2018-05-01

    The implications of the anarchy principle on CP violation in the lepton sector are investigated. A systematic method is introduced to compute the probability density functions for the CP-violating rephasing invariants of the PMNS matrix from the Haar measure relevant to the anarchy principle. Contrary to the CKM matrix, which is hierarchical, it is shown that the Haar measure, and hence the anarchy principle, are very likely to lead to the observed PMNS matrix. Predictions for the CP-violating Dirac rephasing invariant |jD| and the Majorana rephasing invariant |j1| are also obtained. They correspond to ⟨|jD|⟩_Haar = π/105 ≈ 0.030 and ⟨|j1|⟩_Haar = 1/(6π) ≈ 0.053, respectively, in agreement with the experimental hint from T2K of |jD^exp| ≈ 0.032 ± 0.005 (or ≈ 0.033 ± 0.003) for the normal (or inverted) hierarchy.
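
    As a numerical cross-check of the quoted Haar-measure average (a sketch assuming the Dirac invariant is computed from the usual 3×3 unitary-matrix elements; this is not the paper's analytic method), one can sample Haar-random unitaries and average |jD|:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(n):
    """Draw an n x n unitary matrix from the Haar measure (QR of a Ginibre matrix
    with the column phases fixed, following Mezzadri's recipe)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # rescale columns so the distribution is exactly Haar

def jarlskog(u):
    """Dirac rephasing invariant j_D = Im(U_11 U_22 U_12* U_21*)."""
    return np.imag(u[0, 0] * u[1, 1] * np.conj(u[0, 1]) * np.conj(u[1, 0]))

samples = [abs(jarlskog(haar_unitary(3))) for _ in range(100_000)]
print(f"Monte Carlo <|j_D|> = {np.mean(samples):.4f}  (abstract: pi/105 = {np.pi / 105:.4f})")
```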

  1. Estimating load weights with Huber's Cubic Volume formula: a field trial.

    Treesearch

    Dale R. Waddell

    1989-01-01

    Log weights were estimated from the product of Huber's cubic volume formula and green density. Tags showing estimated log weights were attached to logs in the field, and the weights were tallied into a single load weight as logs were assembled for aerial yarding. Accuracy of the estimated load weights was evaluated by comparing the predicted with the actual load...

  2. Simulation study on characteristics of long-range interaction in randomly asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying

    2015-04-01

    In this paper we theoretically investigate, via Monte Carlo simulations, the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update. Particles in the model first try to hop over successive unoccupied sites with a probability q, which differs from previous exclusion process models. The probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases: the low density (LD) phase, the high density (HD) phase, and the maximal current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with the simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.

  3. Integrated seismic stochastic inversion and multi-attributes to delineate reservoir distribution: Case study MZ fields, Central Sumatra Basin

    NASA Astrophysics Data System (ADS)

    Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.

    2017-07-01

    This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, in the formation within the depth interval between the Top Sihapas and the Top Pematang. The method used is a stochastic inversion integrated with seismic multi-attribute analysis through a Probabilistic Neural Network (PNN). Stochastic methods are used to predict the probability of sandstone by mapping the impedance over 50 realizations, which produces a good probability estimate. Stochastic seismic inversion analysis is also more interpretive because it directly gives the value of the property. Our experiment shows that the acoustic impedance (AI) from stochastic inversion captures a more diverse range of uncertainty, so that the probability values are close to the actual values. The produced AI is then used as an input to a multi-attribute analysis, which is used to predict the gamma ray, density and porosity logs. To select the attributes to be used, a stepwise regression algorithm is applied. The selected attributes are then used in the PNN process. The PNN method is chosen because it has the best correlation among the neural network methods tested. Finally, we interpret the products of the multi-attribute analysis, in the form of pseudo-gamma-ray, density and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that the structural trap is identified in the southeastern part of the study area, along the anticline.

  4. The bias of the log power spectrum for discrete surveys

    NASA Astrophysics Data System (ADS)

    Repp, Andrew; Szapudi, István

    2018-03-01

    A primary goal of galaxy surveys is to tighten constraints on cosmological parameters, and the power spectrum P(k) is the standard means of doing so. However, at translinear scales P(k) is blind to much of these surveys' information - information which the log density power spectrum recovers. For discrete fields (such as the galaxy density), A* denotes the statistic analogous to the log density: A* is a `sufficient statistic' in that its power spectrum (and mean) capture virtually all of a discrete survey's information. However, the power spectrum of A* is biased with respect to the corresponding log spectrum for continuous fields, and to use P_A*(k) to constrain the values of cosmological parameters, we require some means of predicting this bias. Here, we present a prescription for doing so; for Euclid-like surveys (with cubical cells 16 h⁻¹ Mpc across) our bias prescription's error is less than 3 per cent. This prediction will facilitate optimal utilization of the information in future galaxy surveys.

  5. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.

  6. Examining dental expenditure and dental insurance accounting for probability of incurring expenses.

    PubMed

    Teusner, Dana; Smith, Valerie; Gnanamanickam, Emmanuel; Brennan, David

    2017-04-01

    There are few studies of dental service expenditure in Australia. Although dental insurance status is strongly associated with a higher probability of dental visiting, some studies indicate that there is little variation in expenditure by insurance status among those who attend for care. Our objective was to assess the overall impact of insurance on expenditures by modelling the association between insurance and expenditure accounting for variation in the probability of incurring expenses, that is dental visiting. A sample of 3000 adults (aged 30-61 years) was randomly selected from the Australian electoral roll. Dental service expenditures were collected prospectively over 2 years by client-held log books. Questionnaires collecting participant characteristics were administered at baseline, 12 months and 24 months. Unadjusted and adjusted ratios of expenditure were estimated using marginalized two-part log-skew-normal models. Such models accommodate highly skewed data and estimate effects of covariates on the overall marginal mean while accounting for the probability of incurring expenses. Baseline response was 39%; of these, 40% (n = 438) were retained over the 2-year period. Only participants providing complete data were included in the analysis (n = 378). Of these, 68.5% were insured, and 70.9% accessed dental services of which nearly all (97.7%) incurred individual dental expenses. The mean dental service expenditure for the total sample (those who did and did not attend) for dental care was AUS$788. Model-adjusted ratios of mean expenditures were higher for the insured (1.61; 95% CI 1.18, 2.20), females (1.38; 95% CI 1.06, 1.81), major city residents (1.43; 95% CI 1.10, 1.84) and those who brushed their teeth twice or more a day (1.50; 95% CI 1.15, 1.96) than their respective counterparts. Accounting for the probability of incurring dental expenses, and other explanatory factors, insured working-aged adults had (on average) approximately 60% higher individual dental service expenditures than uninsured adults. The analytical approach adopted in this study is useful for estimating effects on dental expenditure when a variable is associated with both the probability of visiting for care, and with the types of services received. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
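
    A heavily simplified stand-in for the two-part idea described above (not the authors' marginalized log-skew-normal model): the sketch below fits a logistic regression for the probability of incurring any expense and an OLS regression on log expenditure for those who did, then combines the two parts into a marginal mean. All covariates, coefficients and data are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 378

# Hypothetical covariates: insurance status and sex (illustrative only).
insured = rng.binomial(1, 0.68, n)
female = rng.binomial(1, 0.5, n)
X = sm.add_constant(np.column_stack([insured, female]))

# Simulate a two-part outcome: whether any expense was incurred, and a
# log-normal amount for those who incurred one.
p_any = 1 / (1 + np.exp(-(-0.3 + 0.8 * insured + 0.2 * female)))
any_expense = rng.binomial(1, p_any)
log_amount = 6.0 + 0.4 * insured + 0.2 * female + rng.normal(0, 0.8, n)
expense = any_expense * np.exp(log_amount)

# Part 1: probability of incurring any expense (logistic regression).
part1 = sm.Logit(any_expense, X).fit(disp=False)
# Part 2: log expenditure among those with positive expenses (OLS).
pos = expense > 0
part2 = sm.OLS(np.log(expense[pos]), X[pos]).fit()

# Combine the parts into a marginal mean by insurance status, using the
# log-normal retransformation E[Y | Y > 0] = exp(mu + sigma^2 / 2).
for ins in (0, 1):
    x = np.array([1.0, ins, 0.5])        # average over sex for illustration
    p = 1 / (1 + np.exp(-x @ part1.params))
    mean_pos = np.exp(x @ part2.params + part2.mse_resid / 2)
    print(f"insured={ins}: expected expenditure ~ {p * mean_pos:.0f}")
```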

  7. Maintaining ecosystem resilience: functional responses of tree cavity nesters to logging in temperate forests of the Americas.

    PubMed

    Ibarra, José Tomás; Martin, Michaela; Cockle, Kristina L; Martin, Kathy

    2017-06-30

    Logging often reduces taxonomic diversity in forest communities, but little is known about how this biodiversity loss affects the resilience of ecosystem functions. We examined how partial logging and clearcutting of temperate forests influenced functional diversity of birds that nest in tree cavities. We used point-counts in a before-after-control-impact design to examine the effects of logging on the value, range, and density of functional traits in bird communities in Canada (21 species) and Chile (16 species). Clearcutting, but not partial logging, reduced diversity in both systems. The effect was much more pronounced in Chile, where logging operations removed critical nesting resources (large decaying trees), than in Canada, where decaying aspen Populus tremuloides were retained on site. In Chile, logging was accompanied by declines in species richness, functional richness (amount of functional niche occupied by species), community-weighted body mass (average mass, weighted by species densities), and functional divergence (degree of maximization of divergence in occupied functional niche). In Canada, clearcutting did not affect species richness but nevertheless reduced functional richness and community-weighted body mass. Although some cavity-nesting birds can persist under intensive logging operations, their ecosystem functions may be severely compromised unless future nest trees can be retained on logged sites.

  8. Must Star-forming Galaxies Rapidly Get Denser before They Quench?

    NASA Astrophysics Data System (ADS)

    Abramson, L. E.; Morishita, T.

    2018-05-01

    Using the deepest data yet obtained, we find no evidence preferring compaction-triggered quenching, where rapid increases in galaxy density truncate star formation, over a null hypothesis in which galaxies age at constant surface density (Σe ≡ M*/2πre²). Results from two fully empirical analyses and one quenching-free model calculation support this claim at all z ≤ 3: (1) qualitatively, galaxies' mean U–V colors at 6.5 ≲ log Σe/(M⊙ kpc⁻²) ≲ 10 have reddened at rates/times correlated with Σe, implying that there is no density threshold at which galaxies turn red but that Σe sets the pace of maturation; (2) quantitatively, the abundance of log M*/M⊙ ≥ 9.4 red galaxies never exceeds that of the total population a quenching time earlier at any Σe, implying that galaxies need not transit from low to high densities before quenching; (3) applying d log re/dt = (1/2) d log M*/dt to a suite of lognormal star formation histories reproduces the evolution of the size–mass relation at log M*/M⊙ ≥ 10. All results are consistent with evolutionary rates being set ab initio by global densities, with denser objects evolving faster than less-dense ones toward a terminal quiescence induced by gas depletion or other ∼Hubble-timescale phenomena. Unless stellar ages demand otherwise, observed Σe thresholds need not bear any physical relation to quenching beyond this intrinsic density-formation epoch correlation, adding to Lilly & Carollo's arguments to that effect.

  9. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
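
    A minimal sketch of the transformation idea, using one common parameterization of the log-sinh transform, z = ln(sinh(a + b·y))/b, with illustrative parameter values; the fitting procedure and case-study parameters of the paper are not reproduced here.

```python
import numpy as np

def log_sinh(y, a, b):
    """Log-sinh transformation (one common parameterization): z = ln(sinh(a + b*y)) / b.
    For large y it behaves almost linearly; for small y it behaves like a log
    transform, which is what stabilizes a variance that first grows quickly
    and then levels off."""
    return np.log(np.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    """Back-transformation: y = (arcsinh(exp(b*z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b

# Illustrative parameters (assumed, not taken from the paper's case studies).
a, b = 0.1, 0.02
flow = np.array([5.0, 20.0, 100.0, 500.0])   # a positively skewed variable, e.g. streamflow
z = log_sinh(flow, a, b)
print(z)
print(inv_log_sinh(z, a, b))   # recovers the original values
```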

  10. Using nonlinear quantile regression to estimate the self-thinning boundary curve

    Treesearch

    Quang V. Cao; Thomas J. Dean

    2015-01-01

    The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
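
    As a generic illustration of the linear quantile-regression approach mentioned above (not the authors' nonlinear formulation or data), the sketch below fits an upper quantile of log stem density against log quadratic mean diameter with statsmodels; the simulated stand data and the 0.99 quantile are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

# Hypothetical stand data: quadratic mean diameter (cm) and trees per hectare,
# with the familiar negative log-log size-density trend plus scatter.
dq = rng.uniform(5, 50, 300)
tph = np.exp(11.5 - 1.6 * np.log(dq) + rng.normal(0, 0.4, 300))
data = pd.DataFrame({"log_dq": np.log(dq), "log_tph": np.log(tph)})

# Linear quantile regression at an upper quantile approximates the
# self-thinning boundary line (Reineke-style) on the log-log scale.
boundary = smf.quantreg("log_tph ~ log_dq", data).fit(q=0.99)
print(boundary.params)   # intercept and slope of the estimated boundary line
```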

  11. Modelling interactions of toxicants and density dependence in wildlife populations

    USGS Publications Warehouse

    Schipper, Aafke M.; Hendriks, Harrie W.M.; Kauffman, Matthew J.; Hendriks, A. Jan; Huijbregts, Mark A.J.

    2013-01-01

    1. A major challenge in the conservation of threatened and endangered species is to predict population decline and design appropriate recovery measures. However, anthropogenic impacts on wildlife populations are notoriously difficult to predict due to potentially nonlinear responses and interactions with natural ecological processes like density dependence. 2. Here, we incorporated both density dependence and anthropogenic stressors in a stage-based matrix population model and parameterized it for a density-dependent population of peregrine falcons Falco peregrinus exposed to two anthropogenic toxicants [dichlorodiphenyldichloroethylene (DDE) and polybrominated diphenyl ethers (PBDEs)]. Log-logistic exposure–response relationships were used to translate toxicant concentrations in peregrine falcon eggs to effects on fecundity. Density dependence was modelled as the probability of a nonbreeding bird acquiring a breeding territory as a function of the current number of breeders. 3. The equilibrium size of the population, as represented by the number of breeders, responded nonlinearly to increasing toxicant concentrations, showing a gradual decrease followed by a relatively steep decline. Initially, toxicant-induced reductions in population size were mitigated by an alleviation of the density limitation, that is, an increasing probability of territory acquisition. Once population density was no longer limiting, the toxicant impacts were no longer buffered by an increasing proportion of nonbreeders shifting to the breeding stage, resulting in a strong decrease in the equilibrium number of breeders. 4. Median critical exposure concentrations, that is, median toxicant concentrations in eggs corresponding with an equilibrium population size of zero, were 33 and 46 μg g⁻¹ fresh weight for DDE and PBDEs, respectively. 5. Synthesis and applications. Our modelling results showed that particular life stages of a density-limited population may be relatively insensitive to toxicant impacts until a critical threshold is crossed. In our study population, toxicant-induced changes were observed in the equilibrium number of nonbreeding rather than breeding birds, suggesting that monitoring efforts including both life stages are needed to timely detect population declines. Further, by combining quantitative exposure–response relationships with a wildlife demographic model, we provided a method to quantify critical toxicant thresholds for wildlife population persistence.
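
    To make the coupling of the exposure-response curve to the demographic model concrete, here is a bare-bones sketch with an invented two-stage (nonbreeder/breeder) matrix; the vital rates, EC50 and slope are placeholders, and the density-dependent territory acquisition of the real model is held fixed here.

```python
import numpy as np

def fecundity_multiplier(conc, ec50, slope):
    """Log-logistic exposure-response: fraction of baseline fecundity retained
    at toxicant concentration `conc` (same units as ec50)."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

def equilibrium_growth_rate(conc, ec50=30.0, slope=2.0):
    """Dominant eigenvalue of a toy 2-stage (nonbreeder, breeder) matrix whose
    fecundity is reduced by the exposure-response curve."""
    f = 1.2 * fecundity_multiplier(conc, ec50, slope)  # offspring per breeder (assumed baseline 1.2)
    s_n, s_b = 0.7, 0.85   # survival of nonbreeders and breeders (assumed)
    t = 0.4                # probability a nonbreeder acquires a territory (held fixed here)
    A = np.array([[s_n * (1 - t), f],
                  [s_n * t,       s_b]])
    return np.max(np.real(np.linalg.eigvals(A)))

for c in (0.0, 10.0, 30.0, 60.0):
    print(f"egg concentration {c:>5.1f}: lambda = {equilibrium_growth_rate(c):.3f}")
```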

  12. Period meter for reactors

    DOEpatents

    Rusch, Gordon K.

    1976-01-06

    An improved log N amplifier type nuclear reactor period meter with reduced probability for noise-induced scrams is provided. With the reactor at low power levels a sampling circuit is provided to determine the reactor period by measuring the finite change in the amplitude of the log N amplifier output signal for a predetermined time period, while at high power levels, differentiation of the log N amplifier output signal provides an additional measure of the reactor period.

  13. Use of alternative carrier materials in AOAC Official Method 2008.05, efficacy of liquid sporicides against spores of Bacillus subtilis on a hard, nonporous surface, quantitative three-step method.

    PubMed

    Tomasino, Stephen F; Rastogi, Vipin K; Wallace, Lalena; Smith, Lisa S; Hamilton, Martin A; Pines, Rebecca M

    2010-01-01

    The quantitative Three-Step Method (TSM) for testing the efficacy of liquid sporicides against spores of Bacillus subtilis on a hard, nonporous surface (glass) was adopted as AOAC Official Method 2008.05 in May 2008. The TSM uses 5 x 5 x 1 mm coupons (carriers) upon which spores have been inoculated and which are introduced into liquid sporicidal agent contained in a microcentrifuge tube. Following exposure of inoculated carriers and neutralization, spores are removed from carriers in three fractions (gentle washing, fraction A; sonication, fraction B; and gentle agitation, fraction C). Liquid from each fraction is serially diluted and plated on a recovery medium for spore enumeration. The counts are summed over the three fractions to provide the density (viable spores per carrier), which is log10-transformed to arrive at the log density. The log reduction is calculated by subtracting the mean log density for treated carriers from the mean log density for control carriers. This paper presents a single-laboratory investigation conducted to evaluate the applicability of using two porous carrier materials (ceramic tile and untreated pine wood) and one alternative nonporous material (stainless steel). Glass carriers were included in the study as the reference material. Inoculated carriers were evaluated against three commercially available liquid sporicides (sodium hypochlorite, a combination of peracetic acid and hydrogen peroxide, and glutaraldehyde), each at two levels of presumed efficacy (medium and high) to provide data for assessing the responsiveness of the TSM. Three coupons of each material were evaluated across three replications at each level; three replications of a control were required. Even though all carriers were inoculated with approximately the same number of spores, the observed counts of recovered spores were consistently higher for the nonporous carriers. For control carriers, the mean log densities for the four materials ranged from 6.63 for wood to 7.14 for steel. The pairwise differences between mean log densities, except for glass minus steel, were statistically significant (P < 0.001). The repeatability standard deviations (Sr) for the mean control log density per test were similar for the four materials, ranging from 0.08 for wood to 0.13 for tile. Spore recovery from the carrier materials ranged from approximately 20 to 70%: 20% (pine wood), 40% (ceramic tile), 55% (glass), and 70% (steel). Although the percent spore recovery from pine wood was significantly lower than that from other materials, the performance data indicate that the TSM provides a repeatable and responsive test for determining the efficacy of liquid sporicides on both porous and nonporous materials.
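
    To make the log-density arithmetic concrete, here is a small sketch (with made-up counts, not data from the study) of how per-carrier spore counts are summed over fractions A-C, log10-transformed, and used to compute a log reduction:

```python
import numpy as np

# Hypothetical viable counts (CFU per carrier) recovered in fractions A, B, C
control_fractions = np.array([
    [4.0e6, 7.0e6, 1.0e6],   # carrier 1
    [3.5e6, 8.0e6, 0.8e6],   # carrier 2
    [4.2e6, 6.5e6, 1.1e6],   # carrier 3
])
treated_fractions = np.array([
    [1.2e3, 3.0e3, 0.5e3],
    [0.9e3, 2.5e3, 0.4e3],
    [1.5e3, 3.2e3, 0.6e3],
])

control_log_density = np.log10(control_fractions.sum(axis=1))   # per-carrier log density
treated_log_density = np.log10(treated_fractions.sum(axis=1))

log_reduction = control_log_density.mean() - treated_log_density.mean()
print(f"mean control log density: {control_log_density.mean():.2f}")
print(f"mean treated log density: {treated_log_density.mean():.2f}")
print(f"log reduction: {log_reduction:.2f}")
```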

  14. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Jiang, Runqing; Barnett, Rob B.; Chow, James C. L.; Chen, Jeff Z. Y.

    2007-03-01

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15° increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.
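
    As a schematic of the convolution step described above (not the clinical planning workflow), the sketch below blurs a one-dimensional "static" dose profile with a Gaussian motion PDF and reports a percentage mean dose deviation over a nominal target region; the profile shape, the 5 mm standard deviation, and the target extent are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1D static dose profile on a 1 mm grid: 100% plateau with 10 mm linear penumbrae (illustrative)
x = np.arange(-80, 81)                                    # mm
static = np.clip((50 - np.abs(x)) / 10, 0, 1) * 100.0     # 100% inside +/-40 mm

# Convolve with a Gaussian motion PDF; gaussian_filter1d performs the convolution
sigma_mm = 5.0
blurred = gaussian_filter1d(static, sigma=sigma_mm, mode="nearest")

# Percentage mean dose deviation over an assumed target region (|x| <= 40 mm)
target = np.abs(x) <= 40
pmdd = 100.0 * np.mean(static[target] - blurred[target]) / np.mean(static[target])
print(f"PMDD with sigma = {sigma_mm} mm: {pmdd:.2f}%")
```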

  15. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning.

    PubMed

    Jiang, Runqing; Barnett, Rob B; Chow, James C L; Chen, Jeff Z Y

    2007-03-07

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15 degree increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.

  16. Statistical methods of fracture characterization using acoustic borehole televiewer log interpretation

    NASA Astrophysics Data System (ADS)

    Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.

    2017-08-01

    Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwartz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
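
    The truncated-distribution fitting and AIC ranking described above can be sketched as follows; the synthetic spacing data, truncation limits, and the two candidate distributions are assumptions for illustration only.

```python
import numpy as np
from scipy import stats, optimize

# Synthetic fracture-spacing data (m), observable only between assumed truncation limits
rng = np.random.default_rng(2)
lower, upper = 0.05, 10.0
raw = rng.lognormal(mean=-0.5, sigma=1.0, size=2000)
spacing = raw[(raw >= lower) & (raw <= upper)]

def truncated_nll(params, dist, data):
    """Negative log-likelihood of dist(*params) truncated to [lower, upper]."""
    d = dist(*params)
    norm = d.cdf(upper) - d.cdf(lower)
    if norm <= 0 or not np.isfinite(norm):
        return np.inf
    return -(np.sum(d.logpdf(data)) - len(data) * np.log(norm))

candidates = {
    # name: (frozen-distribution factory, initial parameter guess)
    "lognormal": (lambda s, scale: stats.lognorm(s, scale=scale), [1.0, 1.0]),
    "exponential": (lambda scale: stats.expon(scale=scale), [1.0]),
}

for name, (factory, x0) in candidates.items():
    res = optimize.minimize(truncated_nll, x0, args=(factory, spacing),
                            method="Nelder-Mead")
    aic = 2 * len(x0) + 2 * res.fun   # AIC = 2k - 2 logL
    print(f"{name:12s}  params={np.round(res.x, 3)}  AIC={aic:.1f}")
```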

  17. Coarse woody debris in undisturbed and logged forests in the eastern Brazilian Amazon.

    Treesearch

    Michael Keller; Michael Palace; Gregory P. Asner; Rodrigo Jr. Pereira; Jose Natalino M. Silva

    2004-01-01

    Coarse woody debris (CWD) is an important component of the carbon cycle in tropical forests. We measured the volume and density of fallen CWD at two sites, Cauaxi and Tapajós in the Eastern Amazon. At both sites we studied undisturbed forests (UFs) and logged forests 1 year after harvest. Conventional logging (CL) and reduced impact logging (RIL) were...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wainwright, W. W.

    It is suggested that film speed is the most important single factor in reducing dental radiation exposure but has been given little attention. A necessary step in this direction is the application of quantitative film rating systems (of the type used in general radiography) to dental radiography and attention to exposure-development factors. To this end, a sensitometric method is presented for measurement of undesired dental radiation overexposure resulting from underdevelopment. The method is based on a universal curve of density-log relative exposure for dental x-ray film. The curve is applicable to any given film and machine setting in intraoral roentgenography. Correct exposure time can be predicted from the curve after exposure of only two dental films and use of a lead-aluminum penetrometer. This dental penetrometer and the universal sensitometric curve make it possible to conduct mass surveys of the amount of radiation overexposure from exposure-development factors in dental offices. An example of a typical determination of the effect of exposure-development factors on radiation dose is given. The densities were measured with a densitometer in the range from 0 to 8. With an exposure of 1/2 sec and development for 1 1/2 min at 64 deg F, the hypothetical dentist obtained a density of 1.95 under aluminum. Full development gave a much greater density, 4.05, which was found by reference to the universal curve to represent a radiation exposure of 3.5 times normal. In other words, the underdevelopment (1 1/2 min at 64 deg F) was compensated by overexposure (1/2 sec), so that films of normal density could be obtained. The dentist was informed of the overexposure, and it was predicted that by dividing his time (1/2 sec) by the radiation exposure (3.5), with full development, he would be able to reduce exposure time from 0.5 to 0.143 sec. On the corrected film with an exposure time of 0.15 sec, the density is 1.72. By changing to full development, the dentist obtained normal density with 1/3 the amount of radiation.
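
    The exposure correction in the worked example above is simple arithmetic; the snippet below just re-does the numbers quoted in the abstract.

```python
# Re-doing the abstract's worked example: if full development shows the film was
# overexposed by a factor of 3.5, the corrected exposure time is the old time
# divided by that factor.
old_exposure_s = 0.5
overexposure_factor = 3.5          # read off the universal density-log(exposure) curve
corrected_exposure_s = old_exposure_s / overexposure_factor
print(f"corrected exposure time: {corrected_exposure_s:.3f} s")   # ~0.143 s, as in the text
```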

  19. Effects of Mechanical Soil Disturbance on Rill Connectivity and Soil Erosion Following Logging on Burned Hillslopes in Central California

    NASA Astrophysics Data System (ADS)

    Olsen, W.; Wagenbrenner, J. W.; Demirtas, I.; Robichaud, P. R.

    2016-12-01

    Soil erosion rates in forests increase after severe fires and may pose a threat to aquatic resources. While research has shown that the harvest of burned trees ("salvage logging") may elevate post-fire erosion, it is less clear how disturbance from logging affects rill erosion and sediment yields. We studied 14 catchments (900-7400 m2 "swales") in the area burned by the 2013 Rim Fire in the California Sierra Nevada, nine of which were burned and logged, and five that were burned and unlogged. We installed silt fences, surveyed mechanical disturbance and rill networks, and measured ground cover following logging that occurred between fall 2014 and fall 2015. The logged swales had 20-162 trees ha-1 removed, and high traffic skid trails covered 8-28% of the swale area while low traffic skid trails covered 0-13% of the area. Feller-buncher tracks were minimal at 0-6% of the swale area. Following logging, wood cover increased, while vegetation cover remained about the same. Rill densities ranged from 0.3-22 m m-2 in logged swales and 2.2-16 m m-2 in unlogged swales. Higher bare soil percentages led to increased rill density in all swales. Rills that initiated in high traffic skid trails averaged 42 m in the swales, while rills from untrafficked burned soil averaged 26 m. The number of rills from high traffic skid trails increased with the amount of skid trail area, and these rills were often diverted by waterbars toward the swale outlets. Sediment yields increased with rill density, and did not appear to respond to the modest increase in wood cover post-logging. Results indicate that rill erosion is a dominant sediment transport mechanism for both burned forests and salvage logged forests at the hillslope to small catchment scale. Mitigating skidding disturbance, appropriate placement of waterbars, and reducing the connectivity of bare soil after logging will be important to reduce rilling and sediment yields related to salvage logging.

  20. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  1. Ubiquitous Log Odds: A Common Representation of Probability and Frequency Distortion in Perception, Action, and Cognition

    PubMed Central

    Zhang, Hang; Maloney, Laurence T.

    2012-01-01

    In decision from experience, the source of probability information affects how probability is distorted in the decision task. Understanding how and why probability is distorted is a key issue in understanding the peculiar character of experience-based decision. We consider how probability information is used not just in decision-making but also in a wide variety of cognitive, perceptual, and motor tasks. Very similar patterns of distortion of probability/frequency information have been found in visual frequency estimation, frequency estimation based on memory, signal detection theory, and in the use of probability information in decision-making under risk and uncertainty. We show that distortion of probability in all cases is well captured as linear transformations of the log odds of frequency and/or probability, a model with a slope parameter, and an intercept parameter. We then consider how task and experience influence these two parameters and the resulting distortion of probability. We review how the probability distortions change in systematic ways with task and report three experiments on frequency distortion where the distortions change systematically in the same task. We found that the slope of frequency distortions decreases with the sample size, which is echoed by findings in decision from experience. We review previous models of the representation of uncertainty and find that none can account for the empirical findings. PMID:22294978
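
    The linear-in-log-odds form described above can be written down in a few lines; the slope and intercept values below are arbitrary illustrations, not estimates from the reported experiments.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

def distorted_probability(p, slope, intercept):
    """Linear transformation of log odds: lo(p') = slope * lo(p) + intercept."""
    return inv_logit(slope * logit(p) + intercept)

p = np.array([0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99])
# Illustrative parameters: a slope < 1 compresses extreme probabilities toward the middle
print(np.round(distorted_probability(p, slope=0.6, intercept=0.0), 3))
```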

  2. Quantification of Organic richness through wireline logs: a case study of Roseneath shale formation, Cooper basin, Australia

    NASA Astrophysics Data System (ADS)

    Ahmad, Maqsood; Iqbal, Omer; Kadir, Askury Abd

    2017-10-01

    The late Carboniferous-Middle Triassic, intracratonic Cooper basin in northeastern South Australia and southwestern Queensland is Australia's foremost onshore hydrocarbon producing region. The basin comprises Permian carbonaceous shales, including the lacustrine Roseneath and Murteree formations, which act as both source and reservoir rocks. Source rock can be distinguished from non-source intervals by lower density, higher transit time, higher gamma-ray values, and higher porosity and resistivity with increasing organic content. In this study we compare different empirical approaches, based on density relations and the ΔlogR method applied through three overlays (sonic/resistivity, neutron/resistivity, and density/resistivity), to quantify the total organic carbon (TOC) of the Permian lacustrine Roseneath shale formation using open-hole wireline log data (DEN, GR, CNL, LLD) from the Encounter 1 well. The TOC calculated from fourteen density relations over the depth interval 3174.5-3369 m averaged 0.56%, while the sonic/resistivity, neutron/resistivity, and density/resistivity overlays yielded average values of 3.84%, 3.68%, and 4.40%, respectively. Averaging the three overlay methods gives a TOC of 3.98%. According to the geochemical report in PIRSA, the Roseneath shale formation has TOC of 1-5 wt%. Poor correlations were observed between the TOC calculated from the fourteen density relations and the TOC measured on samples, whereas the average of the three ΔlogR overlays showed good correlation with the measured values.
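
    A commonly cited Passey-style form of the ΔlogR overlay is sketched below for the sonic/resistivity pair; the baseline values, level of organic metamorphism (LOM), and log readings are placeholders, and the constants belong to that general published formulation rather than to this particular study's calibration.

```python
import numpy as np

def delta_log_r_sonic(resistivity, dt, r_baseline, dt_baseline):
    """Sonic/resistivity overlay separation (Passey-style):
    dlogR = log10(R / R_baseline) + 0.02 * (dt - dt_baseline)."""
    return np.log10(resistivity / r_baseline) + 0.02 * (dt - dt_baseline)

def toc_from_delta_log_r(dlogr, lom):
    """TOC (wt%) from dlogR for a given level of organic metamorphism (LOM)."""
    return dlogr * 10 ** (2.297 - 0.1688 * lom)

# Placeholder log readings and baselines (not values from the Encounter 1 well)
resistivity = np.array([20.0, 80.0, 150.0])    # ohm.m (LLD)
dt = np.array([75.0, 90.0, 100.0])             # us/ft (sonic transit time)
dlogr = delta_log_r_sonic(resistivity, dt, r_baseline=10.0, dt_baseline=70.0)
print(np.round(toc_from_delta_log_r(dlogr, lom=9.0), 2))   # TOC estimates in wt%
```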

  3. Spin-the-bottle Sort and Annealing Sort: Oblivious Sorting via Round-robin Random Comparisons

    PubMed Central

    Goodrich, Michael T.

    2013-01-01

    We study sorting algorithms based on randomized round-robin comparisons. Specifically, we study Spin-the-bottle sort, where comparisons are unrestricted, and Annealing sort, where comparisons are restricted to a distance bounded by a temperature parameter. Both algorithms are simple, randomized, data-oblivious sorting algorithms, which are useful in privacy-preserving computations, but, as we show, Annealing sort is much more efficient. We show that there is an input permutation that causes Spin-the-bottle sort to require Ω(n2 log n) expected time in order to succeed, and that in O(n2 log n) time this algorithm succeeds with high probability for any input. We also show there is a specification of Annealing sort that runs in O(n log n) time and succeeds with very high probability. PMID:24550575
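
    A toy sketch of the "unrestricted random round-robin comparison" idea behind Spin-the-bottle sort (not the paper's analyzed pseudocode, and with an explicit sortedness check added so the loop terminates):

```python
import random

def spin_the_bottle_sort(a, rng=random):
    """Repeatedly sweep the array; each position is compare-exchanged with a
    uniformly random partner. The comparison pattern is independent of the
    values; termination here uses an explicit sortedness check."""
    a = list(a)
    n = len(a)
    while any(a[i] > a[i + 1] for i in range(n - 1)):
        for i in range(n):
            j = rng.randrange(n)
            lo, hi = min(i, j), max(i, j)
            if a[lo] > a[hi]:
                a[lo], a[hi] = a[hi], a[lo]
    return a

random.seed(0)
data = [random.randint(0, 99) for _ in range(20)]
print(spin_the_bottle_sort(data))
```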

  4. In-residence, multiple route exposures to chlorpyrifos and diazinon estimated by indirect method models

    NASA Astrophysics Data System (ADS)

    Moschandreas, D. J.; Kim, Y.; Karuchit, S.; Ari, H.; Lebowitz, M. D.; O'Rourke, M. K.; Gordon, S.; Robertson, G.

    One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure distributions of chlorpyrifos and diazinon in the residential microenvironment using the database generated in Arizona (NHEXAS-AZ). A four-stage probability sampling design was used for sample selection. Exposures to pesticides were estimated using the indirect method of exposure calculation by combining measured concentrations of the two pesticides in multiple media with questionnaire information such as time subjects spent indoors, dietary and non-dietary items they consumed, and areas they touched. Most distributions of in-residence exposure to chlorpyrifos and diazinon were log-normal or nearly log-normal. Exposures to chlorpyrifos and diazinon vary by pesticide and route as well as by various demographic characteristics of the subjects. Comparisons of exposure to pesticides were investigated among subgroups of demographic categories, including gender, age, minority status, education, family income, household dwelling type, year the dwelling was built, pesticide use, and carpeted areas within dwellings. Residents with large carpeted areas within their dwellings have higher exposures to both pesticides for all routes than those in less carpet-covered areas. Depending on the route, several other determinants of exposure to pesticides were identified, but a clear pattern could not be established regarding the exposure differences between several subpopulation groups.

  5. New Concepts in the Evaluation of Biodegradation/Persistence of Chemical Substances Using a Microbial Inoculum

    PubMed Central

    Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han

    2011-01-01

    The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a “black box.” If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at “time zero.” If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore the history of the inoculum, e.g., a possible pre-exposure to the test substance or similar substances will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the performance of the biodegradation will follow a typical S shaped growth curve which depends on the specific growth rate under laboratory conditions, the so called F/M ratio (ratio between food and biomass) and the more or less toxic recalcitrant, but possible, metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as half-time. Unfortunately probability and biodegradation half-time are very often confused. As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143

  6. VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.

    2017-05-01

    The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).

  7. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  9. Frequency distribution of lithium in leaves of Lycium andersonii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romney, E.M.; Wallace, A.; Kinnear, J.

    1977-01-01

    Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location and tended not to follow a logₑ normal distribution, and to follow a normal distribution only poorly. There was some negative skewness to the logₑ distribution which did exist. The results imply that the variation in accumulation of Li depends upon native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both logₑ normally distributed. The mean leaf concentration of Li in all locations was 29 μg/g, but the maximum was 166 μg/g.

  10. Human Target Attainment Probabilities for Delafloxacin against Escherichia coli and Pseudomonas aeruginosa

    PubMed Central

    Hoover, Randall; Marra, Andrea; Duffy, Erin; Cammarata, Sue K

    2017-01-01

    Background: Delafloxacin (DLX) is a broad-spectrum fluoroquinolone antibiotic under FDA review for the treatment of ABSSSI. Previous studies determined DLX bacterial stasis and 1-log10 bacterial reduction free AUC0-24/MIC (fAUC0-24/MIC) targets for Escherichia coli (EC) and Pseudomonas aeruginosa (PA) in a mouse thigh infection model. The resulting PK/PD targets were used to predict DLX target attainment probabilities (TAP) in humans.
    Methods: Monte Carlo simulations were used to estimate TAP with DLX 300 mg IV, q12hr. Human DLX plasma pharmacokinetics were determined in patients with ABSSSI in a Phase 3 clinical trial. Individual AUC values were analyzed and determined to be log-normally distributed. The parameters of the AUC distribution were used to simulate random values for fAUC24, which then were combined with random MIC values based on 2014-2015 US distributions of skin and soft tissue isolates of EC (n = 108) and PA (n = 40), to calculate PK/PD TAPs.
    Results: DLX fAUC0-24/MIC targets for bacterial stasis and 1-log10 bacterial reduction for EC were 14.5 and 26.2, and for PA were 3.81 and 5.02, respectively. The Monte Carlo simulations for EC predicted TAPs of 98.7% for stasis at an MIC of 0.25 μg/mL, and 99.3% for 1-log10 bacterial reduction at an MIC of 0.12 μg/mL. The simulations for PA predicted TAPs of 97.3% for stasis and 86.5% for 1-log10 bacterial reduction at an MIC of 1 μg/mL. Target attainment (%) by MIC (μg/mL):
    E. coli:        MIC         0.008  0.015  0.03   0.06   0.12   0.25   0.5    1
                    Stasis      100    100    100    100    100    97.8   50.4   2.0
                    1-Log Kill  100    100    100    100    99.3   60.4   5.8    0.0
    P. aeruginosa:  MIC         0.03   0.06   0.12   0.25   0.5    1      2      4      5
                    Stasis      100    100    100    100    100    97.3   45.9   1.7    0.5
                    1-Log Kill  100    100    100    100    100    86.5   17.8   0.3    0.1
    Conclusion: DLX 300 mg IV, q12hr, should achieve fAUC24/MIC ratios that are adequate to treat ABSSSI caused by most contemporary isolates of EC and PA. For EC, isolates with DLX MICs ≤0.25 μg/mL comprised 73% of all isolates. For PA, isolates with DLX MICs ≤1 μg/mL comprised 88% of all isolates. Similar results would be expected for TAP with oral DLX 450 mg, q12hr.
    Disclosures: R. Hoover, Melinta Therapeutics: Consultant, Consulting fee; A. Marra, Melinta Therapeutics: Employee, Salary; E. Duffy, Melinta Therapeutics: Employee, Salary; S. K. Cammarata, Melinta Therapeutics: Employee, Salary.
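
    The Monte Carlo logic can be sketched as follows; the log-normal AUC parameters and the MIC frequency distribution below are placeholders, not the trial-derived values used in the study, while the PK/PD targets are the ones quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000

# Placeholder log-normal parameters for fAUC0-24 (mg*h/L); not the trial estimates
fauc = rng.lognormal(mean=np.log(11.0), sigma=0.35, size=n_sim)

# Placeholder MIC distribution for an E. coli-like isolate collection
mic_values = np.array([0.008, 0.015, 0.03, 0.06, 0.12, 0.25])
mic_probs  = np.array([0.10, 0.20, 0.30, 0.20, 0.15, 0.05])
mic = rng.choice(mic_values, size=n_sim, p=mic_probs)

stasis_target, one_log_target = 14.5, 26.2      # fAUC0-24/MIC targets quoted above

ratio = fauc / mic
print(f"TAP (stasis):     {np.mean(ratio >= stasis_target):.1%}")
print(f"TAP (1-log kill): {np.mean(ratio >= one_log_target):.1%}")

# Target attainment at a fixed MIC, e.g. 0.25 ug/mL
print(f"TAP at MIC 0.25 (stasis): {np.mean(fauc / 0.25 >= stasis_target):.1%}")
```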

  11. GPER and ERα expression in abnormal endometrial proliferations.

    PubMed

    Tica, Andrei Adrian; Tica, Oana Sorina; Georgescu, Claudia Valentina; Pirici, Daniel; Bogdan, Maria; Ciurea, Tudorel; Mogoantă, Stelian ŞtefăniŢă; Georgescu, Corneliu Cristian; Comănescu, Alexandru Cristian; Bălşeanu, Tudor Adrian; Ciurea, Raluca Niculina; Osiac, Eugen; Buga, Ana Maria; Ciurea, Marius Eugen

    2016-01-01

    G-protein coupled estrogen receptor 1 (GPER), an extranuclear estrogen receptor (ER), seems not to be significantly involved in normal female phenotype development but is particularly associated with severe genital malignancies. This study investigated GPER expression in different types of normal and abnormal proliferative endometrium and its correlation with the presence of ERα. GPER was expressed much more strongly in the cytoplasm than on the cell membrane, in contrast to ERα, which was almost exclusively located in the nucleus. The densities of both ERs were higher in columnar epithelial than in stromal cells, consistent with the higher estrogen sensitivity of epithelial cells. GPER and ERα density decreased in the order complex endometrial hyperplasia (CEH) > simple endometrial hyperplasia (SHE) > normal proliferative endometrium (NPE) > atypical endometrial hyperplasia (AEH), with ERα density consistently higher. In endometrial adenocarcinomas, both ERs were expressed at significantly lower levels and varied widely, but the GPER/ERα ratio was significantly increased in high-grade lesions. Nuclear ERα is responsible for the genomic (and most important) mechanism of action of estrogens, which is involved in cell growth and multiplication. In normal and benign proliferations, ERα expression is increased, reflecting its effects on cells with conserved architecture, whereas in atypical and especially malignant cells the density of ERα (and GPER) is much lower. Cytoplasmic GPER probably interferes with various tyrosine/protein kinase signaling pathways that are also involved in cell growth and proliferation. In benign endometrial lesions, the presence of GPER is, at least partially, the result of an inducing effect of ERα on GPER gene transcription. In high-grade lesions, the GPER/ERα ratio was increased, indicating that GPER is involved per se in malignant endometrial proliferations.

  12. Petrophysical rock properties of the Bazhenov Formation of the South-Eastern part of Kaymysovsky Vault (Tomsk Region)

    NASA Astrophysics Data System (ADS)

    Gorshkov, A. M.; Kudryashova, L. K.; Lee-Van-Khe, O. S.

    2016-09-01

    The article presents the results of studying petrophysical rock properties of the Bazhenov Formation of the South-Eastern part of Kaymysovsky Vault with the Gas Research Institute (GRI) method. The authors have constructed dependence charts for bulk and grain density, open porosity and matrix permeability vs. depth. The results of studying petrophysical properties with the GRI method and core description have allowed dividing the entire section into three intervals each of which characterized by different conditions of Bazhenov Formation rock formation. The authors have determined a correlation between the compensated neutron log and the rock density vs. depth chart on the basis of complex well logging and petrophysical section analysis. They have determined a promising interval for producing hydrocarbons from the Bazhenov Formation in the well under study. Besides, they have determined the typical behavior of compensated neutron logs and SP logs on well logs for this interval. These studies will allow re-interpreting available well logs in order to determine the most promising interval to be involved in Bazhenov Formation development in Tomsk Region.

  13. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets. II. Group comparisons

    USGS Publications Warehouse

    Antweiler, Ronald C.

    2015-01-01

    The main classes of statistical treatments that have been used to determine if two groups of censored environmental data arise from the same distribution are substitution methods, maximum likelihood (MLE) techniques, and nonparametric methods. These treatments along with using all instrument-generated data (IN), even those less than the detection limit, were evaluated by examining 550 data sets in which the true values of the censored data were known, and therefore “true” probabilities could be calculated and used as a yardstick for comparison. It was found that technique “quality” was strongly dependent on the degree of censoring present in the groups. For low degrees of censoring (<25% in each group), the Generalized Wilcoxon (GW) technique and substitution of √2/2 times the detection limit gave overall the best results. For moderate degrees of censoring, MLE worked best, but only if the distribution could be estimated to be normal or log-normal prior to its application; otherwise, GW was a suitable alternative. For higher degrees of censoring (each group >40% censoring), no technique provided reliable estimates of the true probability. Group size did not appear to influence the quality of the result, and no technique appeared to become better or worse than other techniques relative to group size. Finally, IN appeared to do very well relative to the other techniques regardless of censoring or group size.
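
    Only the simple substitution approach mentioned above (√2/2 times the detection limit) followed by a rank-based two-group comparison is sketched below; this is not the Generalized Wilcoxon implementation evaluated in the paper, and the synthetic data and detection limit are assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
detection_limit = 0.5

# Two synthetic concentration groups; values below the detection limit are censored
group_a = rng.lognormal(mean=-0.3, sigma=0.8, size=60)
group_b = rng.lognormal(mean=0.1, sigma=0.8, size=60)

def substitute_censored(x, dl, factor=np.sqrt(2) / 2):
    """Replace censored observations (< dl) with factor * dl."""
    x = np.array(x, dtype=float)
    x[x < dl] = factor * dl
    return x

a_sub = substitute_censored(group_a, detection_limit)
b_sub = substitute_censored(group_b, detection_limit)

stat, p = mannwhitneyu(a_sub, b_sub, alternative="two-sided")
print(f"censored fraction A: {np.mean(group_a < detection_limit):.0%}, "
      f"B: {np.mean(group_b < detection_limit):.0%}")
print(f"rank-sum test on substituted data: p = {p:.3f}")
```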

  14. Finding Faults: Tohoku and other Active Megathrusts/Megasplays

    NASA Astrophysics Data System (ADS)

    Moore, J. C.; Conin, M.; Cook, B. J.; Kirkpatrick, J. D.; Remitti, F.; Chester, F.; Nakamura, Y.; Lin, W.; Saito, S.; Scientific Team, E.

    2012-12-01

    Current subduction-fault drilling procedure is to drill a logging hole, identify target faults, then core and instrument them. Seismic data may constrain faults but the additional resolution of borehole logs is necessary for efficient coring and instrumentation under difficult conditions and tight schedules. Thus, refining the methodology of identifying faults in logging data has become important, and thus comparison of log signatures of faults in different locations is worthwhile. At the C0019 (JFAST) drill site, the Tohoku megathrust was principally identified as a decollement where steep cylindrically-folded bedding abruptly flattens below the basal detachment. A similar structural contrast occurs across a megasplay fault in the NanTroSEIZE transect (Site C0004). At the Tohoku decollement, a high gamma-ray value from a pelagic clay layer, predicted as a likely decollement sediment type, strengthens the megathrust interpretation. The original identification of the pelagic clay as a decollement candidate was based on results of previous coring of an oceanic reference site. Negative density anomalies, often seen as low resistivity zones, identified a subsidiary fault in the deformed prism overlying the Tohoku megathrust. Elsewhere, at Barbados, Nankai (Moroto), and Costa Rica, negative density anomalies are associated with the decollement and other faults in hanging walls. Log-based density anomalies in fault zones provide a basis for recognizing in-situ fault zone dilation. At the Tohoku Site C0019, breakouts are present above but not below the megathrust. Changes in breakout orientation and width (stress magnitude) occur across megasplay faults at Sites C0004 and C0010 in the NantroSEIZE transect. Annular pressure anomalies are not apparent at the Tohoku megathrust, but are variably associated with faults and fracture zones drilled along the NanTroSEIZE transect. Overall, images of changes in structural features, negative density anomalies, and changes in breakout occurrence and orientation provide the most common log criteria for recognizing major thrust zones in ocean drilling holes at convergent margins. In the case of JFAST, identification of faults by logging was confirmed during subsequent coring activities, and logging data was critical for successful placement of the observatory down hole.

  15. Pan-European comparison of candidate distributions for climatological drought indices, SPI and SPEI

    NASA Astrophysics Data System (ADS)

    Stagge, James; Tallaksen, Lena; Gudmundsson, Lukas; Van Loon, Anne; Stahl, Kerstin

    2013-04-01

    Drought indices are vital to objectively quantify and compare drought severity, duration, and extent across regions with varied climatic and hydrologic regimes. The Standardized Precipitation Index (SPI), a well-reviewed meterological drought index recommended by the WMO, and its more recent water balance variant, the Standardized Precipitation-Evapotranspiration Index (SPEI) both rely on selection of univariate probability distributions to normalize the index, allowing for comparisons across climates. The SPI, considered a universal meteorological drought index, measures anomalies in precipitation, whereas the SPEI measures anomalies in climatic water balance (precipitation minus potential evapotranspiration), a more comprehensive measure of water availability that incorporates temperature. Many reviewers recommend use of the gamma (Pearson Type III) distribution for SPI normalization, while developers of the SPEI recommend use of the three parameter log-logistic distribution, based on point observation validation. Before the SPEI can be implemented at the pan-European scale, it is necessary to further validate the index using a range of candidate distributions to determine sensitivity to distribution selection, identify recommended distributions, and highlight those instances where a given distribution may not be valid. This study rigorously compares a suite of candidate probability distributions using WATCH Forcing Data, a global, historical (1958-2001) climate dataset based on ERA40 reanalysis with 0.5 x 0.5 degree resolution and bias-correction based on CRU-TS2.1 observations. Using maximum likelihood estimation, alternative candidate distributions are fit for the SPI and SPEI across the range of European climate zones. When evaluated at this scale, the gamma distribution for the SPI results in negatively skewed values, exaggerating the index severity of extreme dry conditions, while decreasing the index severity of extreme high precipitation. This bias is particularly notable for shorter aggregation periods (1-6 months) during the summer months in southern Europe (below 45° latitude), and can partially be attributed to distribution fitting difficulties in semi-arid regions where monthly precipitation totals cluster near zero. By contrast, the SPEI has potential for avoiding this fitting difficulty because it is not bounded by zero. However, the recommended log-logistic distribution produces index values with less variation than the standard normal distribution. Among the alternative candidate distributions, the best fit distribution and the distribution parameters vary in space and time, suggesting regional commonalities within hydroclimatic regimes, as discussed further in the presentation.
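
    A minimal SPI-style calculation for a single calendar month and aggregation period is sketched below, using a gamma fit (location fixed at zero) followed by a probit transform; the synthetic precipitation totals and the omission of zero-precipitation handling are simplifying assumptions, and the study itself uses WATCH Forcing Data and several candidate distributions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic 3-month aggregated precipitation totals for one calendar month, 44 years
precip = rng.gamma(shape=2.0, scale=40.0, size=44)

def spi(values):
    """Standardized Precipitation Index: fit a gamma distribution (loc fixed at 0),
    then map the fitted CDF through the standard normal quantile function.
    Zeros, if present, would need a mixed (zero-inflated) treatment - omitted here."""
    shape, loc, scale = stats.gamma.fit(values, floc=0)
    cdf = stats.gamma.cdf(values, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

index = spi(precip)
print(np.round(index[:10], 2))   # roughly N(0,1) by construction if the gamma fit is adequate
```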

  16. An Analytic Comparison of Effect Sizes for Differential Item Functioning

    ERIC Educational Resources Information Center

    Demars, Christine E.

    2011-01-01

    Three types of effect sizes for DIF are described in this exposition: log of the odds-ratio (differences in log-odds), differences in probability-correct, and proportion of variance accounted for. Using these indices involves conceptualizing the degree of DIF in different ways. This integrative review discusses how these measures are impacted in…
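
    A small numeric illustration (with made-up values) of why these effect sizes are not interchangeable: the same difference in log-odds corresponds to different probability-correct differences depending on where on the probability scale it occurs.

```python
import numpy as np

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# The same DIF in log-odds units (here 0.5) translates into different
# probability-correct differences depending on the reference probability.
dif_log_odds = 0.5
for p_ref in (0.1, 0.5, 0.9):
    p_focal = inv_logit(np.log(p_ref / (1 - p_ref)) + dif_log_odds)
    print(f"reference p = {p_ref:.1f} -> probability difference = {p_focal - p_ref:+.3f}")
```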

  17. The prisoner's dilemma as a cancer model.

    PubMed

    West, Jeffrey; Hasnain, Zaki; Mason, Jeremy; Newton, Paul K

    2016-09-01

    Tumor development is an evolutionary process in which a heterogeneous population of cells with different growth capabilities compete for resources in order to gain a proliferative advantage. What are the minimal ingredients needed to recreate some of the emergent features of such a developing complex ecosystem? What is a tumor doing before we can detect it? We outline a mathematical model, driven by a stochastic Moran process, in which cancer cells and healthy cells compete for dominance in the population. Each are assigned payoffs according to a Prisoner's Dilemma evolutionary game where the healthy cells are the cooperators and the cancer cells are the defectors. With point mutational dynamics, heredity, and a fitness landscape controlling birth and death rates, natural selection acts on the cell population and simulated 'cancer-like' features emerge, such as Gompertzian tumor growth driven by heterogeneity, the log-kill law which (linearly) relates therapeutic dose density to the (log) probability of cancer cell survival, and the Norton-Simon hypothesis which (linearly) relates tumor regression rates to tumor growth rates. We highlight the utility, clarity, and power that such models provide, despite (and because of) their simplicity and built-in assumptions.
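
    A stripped-down Moran-process sketch in the spirit of the model (two cell types, Prisoner's Dilemma payoffs, fitness-proportional birth, uniform death); the payoff values, population size, and selection scheme are illustrative assumptions, and none of the paper's mutation, heredity, or fitness-landscape machinery is included.

```python
import numpy as np

rng = np.random.default_rng(6)

# Prisoner's Dilemma payoffs: row = focal type, column = opponent type
#           cooperator  defector
payoff = np.array([[3.0, 0.0],    # healthy cell (cooperator)
                   [5.0, 1.0]])   # cancer cell (defector)

N = 200
types = np.zeros(N, dtype=int)                       # start all healthy
types[rng.choice(N, size=5, replace=False)] = 1      # seed a few cancer cells

def fitness(types):
    """Average payoff of each individual against the rest of the population."""
    counts = np.bincount(types, minlength=2)
    opponents = counts[None, :] - np.eye(2, dtype=int)[types]   # exclude self
    return (payoff[types] * opponents).sum(axis=1) / (N - 1)

history = []
for step in range(20_000):
    f = fitness(types)
    parent = rng.choice(N, p=f / f.sum())     # birth: proportional to fitness
    dying = rng.integers(N)                   # death: uniformly at random
    types[dying] = types[parent]
    history.append(types.mean())              # fraction of cancer cells

print(f"final defector (cancer) fraction: {history[-1]:.2f}")
```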

  18. Complexation of the calcium cation with antamanide: an experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Makrlík, Emanuel; Böhm, Stanislav; Vaňura, Petr; Ruzza, Paolo

    2015-06-01

    By using extraction experiments and γ-activity measurements, the extraction constant corresponding to the equilibrium Ca2+(aq) + 1·Sr2+(nb) ⇌ 1·Ca2+(nb) + Sr2+(aq) occurring in the two-phase water-nitrobenzene system (1 = antamanide; aq = aqueous phase, nb = nitrobenzene phase) was determined as log Kex(Ca2+, 1·Sr2+) = 1.6 ± 0.1. Further, the stability constant of the 1·Ca2+ complex in nitrobenzene saturated with water was calculated for a temperature of 25 °C: log βnb(1·Ca2+) = 10.9 ± 0.2. Finally, applying quantum mechanical density functional level of theory calculations, the most probable structure of the cationic complex species 1·Ca2+ was derived. In the resulting complex, the 'central' cation Ca2+ is bound by six strong bonding interactions to the corresponding six carbonyl oxygen atoms of the parent ligand 1. Besides, the whole 1·Ca2+ complex structure is stabilised by two intramolecular hydrogen bonds. The interaction energy of the considered 1·Ca2+ complex, involving the Boys-Bernardi counterpoise corrections of the basis set superposition error, was found to be -1219.3 kJ/mol, confirming the formation of this cationic species.

  19. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450

  20. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between requirements on transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  1. A Feasibility Study of Expanding the F404 Aircraft Engine Repair Capability at the Aircraft Intermediate Maintenance Department

    DTIC Science & Technology

    1993-06-01

    [Only fragments of the report's table of contents and tables are available: A. Objectives; B. History ("...utilization, and any additional manpower requirements at the 'selected' AIMD's"; "Until late 1991 both NADEP JAX and NADEP North Island (NORIS..."); and a comparison of triangular versus log-normal distributions for service times at AIMD Cecil Field.]

  2. Erosion associated with cable and tractor logging in northwestern California

    Treesearch

    R. M. Rice; P. A. Datzman

    1981-01-01

    Abstract - Erosion and site conditions were measured at 102 logged plots in northwestern California. Erosion averaged 26.8 m³/ha. A log-normal distribution was a better fit to the data. The antilog of the mean of the logarithms of erosion was 3.2 m³/ha. The Coast District Erosion Hazard Rating was a poor predictor of erosion related to logging. In a new equation...

  3. Reduced density gradient as a novel approach for estimating QSAR descriptors, and its application to 1, 4-dihydropyridine derivatives with potential antihypertensive effects.

    PubMed

    Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G

    2016-12-01

    The relationship between the chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis methods. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (EHOMO), molecular volume (V), partition coefficient (log P), the non-covalent interaction NCI(H4-G), and the dual descriptor [Δf(r)]. The model yielded values of R2 = 79.57 and Q2 = 69.67 that were validated with the following four internal analytical validations, DK = 0.076, DQ = -0.006, RP = 0.056, and RN = 0.000, and the external validation Q2boot = 64.26. The resulting QSAR model can be used to estimate biological activity with high reliability for new compounds based on the DHP series. Graphical abstract: The good correlation between log IC50 and NCI(H4-G) estimated by the reduced density gradient approach for the DHP derivatives.

  4. zCOSMOS - 10k-bright spectroscopic sample. The bimodality in the galaxy stellar mass function: exploring its evolution with redshift

    NASA Astrophysics Data System (ADS)

    Pozzetti, L.; Bolzonella, M.; Zucca, E.; Zamorani, G.; Lilly, S.; Renzini, A.; Moresco, M.; Mignoli, M.; Cassata, P.; Tasca, L.; Lamareille, F.; Maier, C.; Meneux, B.; Halliday, C.; Oesch, P.; Vergani, D.; Caputi, K.; Kovač, K.; Cimatti, A.; Cucciati, O.; Iovino, A.; Peng, Y.; Carollo, M.; Contini, T.; Kneib, J.-P.; Le Févre, O.; Mainieri, V.; Scodeggio, M.; Bardelli, S.; Bongiorno, A.; Coppa, G.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Kampczyk, P.; Knobel, C.; Le Borgne, J.-F.; Le Brun, V.; Pellò, R.; Perez Montero, E.; Ricciardelli, E.; Silverman, J. D.; Tanaka, M.; Tresse, L.; Abbas, U.; Bottini, D.; Cappi, A.; Guzzo, L.; Koekemoer, A. M.; Leauthaud, A.; Maccagni, D.; Marinoni, C.; McCracken, H. J.; Memeo, P.; Porciani, C.; Scaramella, R.; Scarlata, C.; Scoville, N.

    2010-11-01

    We present the galaxy stellar mass function (GSMF) to redshift z ≃ 1, based on the analysis of about 8500 galaxies with I < 22.5 (AB mag) over 1.4 deg2, which are part of the zCOSMOS-bright 10k spectroscopic sample. We investigate the total GSMF, as well as the contributions of early- and late-type galaxies (ETGs and LTGs, respectively), defined by different criteria (broad-band spectral energy distribution, morphology, spectral properties, or star formation activities). We unveil a galaxy bimodality in the global GSMF, whose shape is more accurately represented by 2 Schechter functions, one linked to the ETG and the other to the LTG populations. For the global population, we confirm a mass-dependent evolution (“mass-assembly downsizing”), i.e., galaxy number density increases with cosmic time by a factor of two between z = 1 and z = 0 for intermediate-to-low mass (log (ℳ/ℳ⊙) ~ 10.5) galaxies but less than 15% for log(ℳ/ℳ⊙) > 11. We find that the GSMF evolution at intermediate-to-low values of ℳ (log (ℳ/ℳ⊙) < 10.6) is mostly explained by the growth in stellar mass driven by smoothly decreasing star formation activities, despite the redder colours predicted in particular at low redshift. The low residual evolution is consistent, on average, with ~0.16 merger per galaxy per Gyr (of which fewer than 0.1 are major), with a hint of a decrease with cosmic time but not a clear dependence on the mass. From the analysis of different galaxy types, we find that ETGs, regardless of the classification method, increase in number density with cosmic time more rapidly with decreasing M, i.e., follow a top-down building history, with a median “building redshift” increasing with mass (z > 1 for log(ℳ/ℳ⊙) > 11), in contrast to hierarchical model predictions. For LTGs, we find that the number density of blue or spiral galaxies with log(ℳ/ℳ⊙) > 10 remains almost constant with cosmic time from z ~ 1. Instead, the most extreme population of star-forming galaxies (with high specific star formation), at intermediate/high-mass, rapidly decreases in number density with cosmic time. Our data can be interpreted as a combination of different effects. Firstly, we suggest a transformation, driven mainly by SFH, from blue, active, spiral galaxies of intermediate mass to blue quiescent and subsequently (1-2 Gyr after) red, passive types of low specific star formation. We find an indication that the complete morphological transformation, probably driven by dynamical processes, into red spheroidal galaxies, occurred on longer timescales or followed after 1-2 Gyr. A continuous replacement of blue galaxies is expected to be accomplished by low-mass active spirals increasing their stellar mass. We estimate the growth rate in number and mass density of the red galaxies at different redshifts and masses. The corresponding fraction of blue galaxies that, at any given time, is transforming into red galaxies per Gyr, due to the quenching of their SFR, is on average ~25% for log(ℳ/ℳ⊙) < 11. We conclude that the build-up of galaxies and in particular of ETGs follows the same downsizing trend with mass (i.e. occurs earlier for high-mass galaxies) as the formation of their stars and follows the converse of the trend predicted by current SAMs. In this scenario, we expect there to be a negligible evolution of the galaxy baryonic mass function (GBMF) for the global population at all masses and a decrease with cosmic time in the GBMF for the blue galaxy population at intermediate-high masses. 
Based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, program 175.A-0839.

  5. Agent-based simulation of a financial market

    NASA Astrophysics Data System (ADS)

    Raberto, Marco; Cincotti, Silvano; Focardi, Sergio M.; Marchesi, Michele

    2001-10-01

    This paper introduces an agent-based artificial financial market in which heterogeneous agents trade one single asset through a realistic trading mechanism for price formation. Agents are initially endowed with a finite amount of cash and a given finite portfolio of assets. There is no money-creation process; the total available cash is conserved in time. In each period, agents make random buy and sell decisions that are constrained by available resources, subject to clustering, and dependent on the volatility of previous periods. The model proposed herein is able to reproduce the leptokurtic shape of the probability density of log price returns and the clustering of volatility. Implemented using extreme programming and object-oriented technology, the simulator is a flexible computational experimental facility that can find applications in both academic and industrial research projects.
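
    A minimal, illustrative Python sketch of the mechanism described above (random buy/sell order flow whose dispersion is fed back from recent volatility) is given below. It is not the simulator of the paper: cash and asset bookkeeping and the realistic price-formation mechanism are omitted, and every parameter value is a hypothetical placeholder; the sketch only shows how such feedback can yield leptokurtic log returns and volatility clustering.

    ```python
    import numpy as np

    # Toy order-flow model (NOT the artificial market of the paper): random buy/sell orders,
    # with order-size dispersion fed back from recent volatility.
    rng = np.random.default_rng(42)
    n_agents, n_steps = 200, 5000
    price, vol = 10.0, 0.01          # starting price and running volatility estimate
    log_returns = []

    for _ in range(n_steps):
        sides = rng.choice([-1.0, 1.0], size=n_agents)                   # sell (-1) or buy (+1)
        sizes = rng.lognormal(mean=0.0, sigma=1.0 + 50.0 * vol, size=n_agents)
        imbalance = np.sum(sides * sizes) / np.sum(sizes)                # net order imbalance in [-1, 1]
        r = 0.05 * imbalance                                             # log return from price impact
        price *= np.exp(r)
        vol = 0.95 * vol + 0.05 * abs(r)                                 # volatility feedback -> clustering
        log_returns.append(r)

    rets = np.array(log_returns)
    excess_kurtosis = np.mean((rets - rets.mean()) ** 4) / np.var(rets) ** 2 - 3.0
    print(f"excess kurtosis of log returns: {excess_kurtosis:.2f}")      # > 0 means leptokurtic
    ```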

  6. Nature and origin of upper crustal seismic velocity fluctuations and associated scaling properties: Combined stochastic analyses of KTB velocity and lithology logs

    USGS Publications Warehouse

    Goff, J.A.; Holliger, K.

    1999-01-01

    The main borehole of the German Continental Deep Drilling Program (KTB) extends over 9000 m into a crystalline upper crust consisting primarily of interlayered gneiss and metabasite. We present a joint analysis of the velocity and lithology logs in an effort to extract the lithology component of the velocity log. Covariance analysis of the lithology log, approximated as a binary series, indicates that it may originate from the superposition of two Brownian stochastic processes (fractal dimension 1.5) with characteristic scales of ~2800 m and ~150 m, respectively. Covariance analysis of the velocity fluctuations provides evidence for the superposition of four stochastic processes with distinct characteristic scales. The largest two scales are identical to those derived from the lithology, confirming that these scales of velocity heterogeneity are caused by lithology variations. The third characteristic scale, ~20 m, also a Brownian process, is probably related to fracturing, based on correlation with the resistivity log. The superposition of these three Brownian processes closely mimics the commonly observed 1/k decay (fractal dimension 2.0) of the velocity power spectrum. The smallest-scale process (characteristic scale ~1.7 m) requires a low fractal dimension, ~1.0, and accounts for ~60% of the total rms velocity variation. A comparison of successive logs from 6900-7140 m depth indicates that such variations are not repeatable and thus probably do not represent true velocity variations in the crust. The results of this study resolve the disparity between differing published estimates of seismic heterogeneity based on the KTB sonic logs, and bridge the gap between estimates of crustal heterogeneity from geologic maps and borehole logs. Copyright 1999 by the American Geophysical Union.

  7. Statistics of velocity and temperature fluctuations in two-dimensional Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Huang, Yong-Xiang; Jiang, Nan; Liu, Yu-Lu; Lu, Zhi-Ming; Qiu, Xiang; Zhou, Quan

    2017-08-01

    We investigate fluctuations of the velocity and temperature fields in two-dimensional (2D) Rayleigh-Bénard (RB) convection by means of direct numerical simulations (DNS) over the Rayleigh number range 10^6 ≤ Ra ≤ 10^10 and for a fixed Prandtl number Pr = 5.3 and aspect ratio Γ = 1. Our results show that there exists a counter-gradient turbulent transport of energy from fluctuations to the mean flow both locally and globally, implying that the Reynolds stress is one of the driving mechanisms of the large-scale circulation in 2D turbulent RB convection besides the buoyancy of thermal plumes. We also find that the viscous boundary layer (BL) thicknesses near the horizontal conducting plates and near the vertical sidewalls, δu and δv, are almost the same for a given Ra, and they scale with the Rayleigh and Reynolds numbers as ~Ra^(-0.26±0.03) and ~Re^(-0.43±0.04). Furthermore, the thermal BL thickness δθ defined based on the root-mean-square (rms) temperature profiles is found to agree with Prandtl-Blasius predictions from the scaling point of view. In addition, the probability density functions of the turbulent energy (ε_u') and thermal (ε_θ') dissipation rates, calculated, respectively, within the viscous and thermal BLs, are found to be always non-log-normal and to obey approximately a Bramwell-Holdsworth-Pinton distribution, first introduced to characterize rare fluctuations in a confined turbulent flow and critical phenomena.

  8. Research on the physical properties of supercritical CO2 and the log evaluation of CO2-bearing volcanic reservoirs

    NASA Astrophysics Data System (ADS)

    Pan, Baozhi; Lei, Jian; Zhang, Lihua; Guo, Yuhang

    2017-10-01

    CO2-bearing reservoirs are difficult to distinguish from other natural gas reservoirs during gas exploration. Because physical parameters for supercritical CO2, particularly neutron porosity, are lacking, CO2-bearing reservoirs are currently evaluated with log evaluation methods developed for hydrocarbon gases, and the differences in the physical properties of hydrocarbon and CO2 gases have led to serious errors. This study focused on the deep volcanic rocks of the Songliao Basin. Based on the dependence of supercritical CO2 density and acoustic velocity on temperature and pressure, relationships between CO2 density, acoustic velocity, and depth in the area were established. A neutron logging simulation was completed based on a Monte Carlo method. Through the simulation of wet limestone neutron logging, the relationship between the count rate ratio of the short- and long-spaced detectors and the neutron porosity was acquired, and the neutron moderation behaviour of supercritical CO2 was obtained. Given the complexity of the volcanic rock mineral composition, a volcanic rock volume model was established, and the matrix neutron and density parameters were acquired using the ECS log. The properties of CO2 were applied in the log evaluation of the CO2-bearing volcanic reservoirs in the southern Songliao Basin. The porosity and CO2 saturation were obtained, and reasonable results were achieved for the CO2-bearing reservoirs.

  9. Comparison of MWD and wireline applications and decision criteria, Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zainun, K.; Redzuan, M.; Said, M.

    1994-07-01

    Since 1987, usage of measurement while drilling (MWD) technology within Esso Production Malaysia Inc. (EPMI) has evolved from an auxiliary directional drilling service to a reliable alternative to wireline logs for formation evaluation and well-completion purposes. The shift in EPMI's attitude toward the use of MWD in formation evaluation is attributed to the availability of a complete suite of logging services for the log analysis procedure, accuracy of the data, sufficient control of reservoir quality and continuity in fields where there is already a high density of wireline-logged wells, the increasing number of high-angle and horizontal wells being drilled, a favorable track record, and realized economic benefits. The in-house analysis procedure (EPMILOG) requires the availability of deep and/or shallow investigating resistivity, formation density, neutron porosity, and gamma ray tools for a complete analysis. The availability of these services in MWD, together with comparative evaluations of MWD responses against their correlative wireline counterparts, shows that MWD technology can be used, to a large extent, to complement or replace routine wireline logging services. MWD resistivity measurements are frequently observed to be less affected by mud filtrate invasion than the correlative wireline measurements and are, therefore, closer to the true resistivity of the formation. MWD formation evaluation services are most widely used in fields where there is already a high density of wells that were logged using wireline. The MWD data are used to decide perforation depths and intervals.

  10. A comparison of reliability and conventional estimation of safe fatigue life and safe inspection intervals

    NASA Technical Reports Server (NTRS)

    Hooke, F. H.

    1972-01-01

    Both the conventional and reliability analyses for determining safe fatigue life are predicated on a population having a specified (usually log normal) distribution of life to collapse under a fatigue test load. Under a random service load spectrum, random occurrences of load larger than the fatigue test load may confront and cause collapse of structures which are weakened, though not yet to the fatigue test load. These collapses are included in the reliability analysis but excluded in the conventional analysis. The theory of risk determination by each method is given, and several reasonably typical examples have been worked out, in which it transpires that if one excludes collapse through exceedance of the uncracked strength, the reliability and conventional analyses give virtually identical probabilities of failure or survival.

  11. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: the copy number of all species of molecule may be treated as continuous, and the probability density functions (pdfs) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising the sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  12. Knot probabilities in random diagrams

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Chapman, Harrison; Mastin, Matt

    2016-10-01

    We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10-crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data show a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.

  13. Consequence of reputation in the Sznajd consensus model

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno; Forgerini, Fabricio L.

    2010-07-01

    In this work we study a modified version of the Sznajd sociophysics model. In particular, we introduce reputation, a mechanism that limits the agents' capacity of persuasion. The reputation is introduced as a time-dependent score, and its introduction avoids dictatorship (all spins parallel) for a wide range of parameters. The relaxation time follows a log-normal-like distribution. In addition, we show that the usual phase transition also occurs, as in the standard model, and that it depends on the initial concentration of individuals following an opinion, occurring at an initial density of up spins greater than 1/2. The transition point is determined by means of a finite-size scaling analysis.
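
    A minimal sketch of a Sznajd-type update in which a reputation score limits persuasion is shown below. The update rule, the reputation dynamics, and all parameter values are illustrative assumptions in the spirit of the model, not the authors' exact specification.

    ```python
    import numpy as np

    # Toy 1D Sznajd dynamics with reputation-limited persuasion (illustrative assumptions only).
    rng = np.random.default_rng(0)
    N, steps = 500, 200_000
    spins = rng.choice([-1, 1], size=N, p=[0.4, 0.6])   # initial density of up spins > 1/2
    reputation = np.ones(N)                             # time-dependent persuasion scores

    for _ in range(steps):
        i = rng.integers(0, N - 3)                      # pair (i+1, i+2) with outer neighbours i and i+3
        if spins[i + 1] == spins[i + 2]:                # an agreeing pair may persuade its neighbours
            pair_rep = 0.5 * (reputation[i + 1] + reputation[i + 2])
            persuaded = False
            if pair_rep >= reputation[i]:
                persuaded |= (spins[i] != spins[i + 1])
                spins[i] = spins[i + 1]
            if pair_rep >= reputation[i + 3]:
                persuaded |= (spins[i + 3] != spins[i + 1])
                spins[i + 3] = spins[i + 1]
            if persuaded:                               # successful persuasion raises the pair's reputation
                reputation[i + 1] += 1
                reputation[i + 2] += 1

    print("final fraction of up spins:", np.mean(spins == 1))   # < 1.0 means no full dictatorship
    ```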

  14. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and the error component follows a multivariate normal distribution whose covariance is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the choice probabilities involve multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes form, and a Metropolis-Hastings sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188
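
    Each multinomial-probit choice probability is a multidimensional integral of the multivariate normal density, which is why sampling-based methods are needed. The sketch below approximates such probabilities by brute-force simulation of correlated utilities; the systematic utilities and the covariance matrix are hypothetical, and the paper's actual estimation uses a hierarchical Bayes formulation with Metropolis-Hastings MCMC rather than this simple frequency count.

    ```python
    import numpy as np

    # Monte Carlo approximation of multinomial-probit route-choice probabilities
    # (hypothetical 3-route example; not the estimated Guangzhou Metro model).
    rng = np.random.default_rng(0)
    systematic = np.array([-1.0, -1.2, -0.8])        # hypothetical systematic utilities of 3 routes
    cov = np.array([[1.0, 0.4, 0.2],                 # hypothetical error covariance encoding route
                    [0.4, 1.0, 0.3],                 # correlation / transfer variance structure
                    [0.2, 0.3, 1.0]])

    errors = rng.multivariate_normal(np.zeros(3), cov, size=100_000)
    utilities = systematic + errors                  # total utility = systematic part + correlated error
    choices = utilities.argmax(axis=1)               # each simulated traveller picks the max-utility route
    probs = np.bincount(choices, minlength=3) / len(choices)
    print("simulated route-choice probabilities:", np.round(probs, 3))
    ```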

  15. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.
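
    The sketch below illustrates the estimation setting described above: Gaussian backscatter measurements whose variance is a known function of the expected value, inverted for wind speed by maximum likelihood. The model function and the variance law are hypothetical placeholders (not the Seasat SASS model function), so the numbers are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def model_sigma0(wind_speed):
        """Hypothetical geophysical model function relating wind speed to expected backscatter."""
        return 0.01 * wind_speed ** 1.6

    def variance(mean_sigma0):
        """Hypothetical variance of a backscatter measurement as a function of its expected value."""
        return (0.05 * mean_sigma0) ** 2 + 1e-6

    def neg_log_likelihood(wind_speed, measurements):
        m = model_sigma0(wind_speed)
        v = variance(m)
        return np.sum(0.5 * np.log(2.0 * np.pi * v) + (measurements - m) ** 2 / (2.0 * v))

    rng = np.random.default_rng(7)
    true_wind = 8.0
    obs = rng.normal(model_sigma0(true_wind),
                     np.sqrt(variance(model_sigma0(true_wind))), size=16)
    fit = minimize_scalar(neg_log_likelihood, bounds=(0.5, 30.0), args=(obs,), method="bounded")
    print("maximum-likelihood wind-speed estimate:", round(fit.x, 2), "m/s")
    ```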

  16. Interactions among forest age, valley and channel morphology, and log jams regulate animal production in mountain streams

    NASA Astrophysics Data System (ADS)

    Walters, D. M.; Venarsky, M. P.; Hall, R. O., Jr.; Herdrich, A.; Livers, B.; Winkelman, D.; Wohl, E.

    2014-12-01

    Forest age and local valley morphometry strongly influence the form and function of mountain streams in Colorado. Streams in valleys with old-growth forest (>350 years) have extensive log jam complexes that create multi-thread channel reaches with extensive pool habitat and large depositional areas. Streams in younger unmanaged forests (e.g., 120 years old) and intensively managed forests have far fewer log jams and lower wood loads; these are single-thread streams dominated by riffles and with little depositional habitat. We hypothesized that log jam streams would retain more organic matter and have higher metabolism, leading to greater production of stream macroinvertebrates and trout. Log jam reaches should also have greater emergence of adult aquatic insects, and consequently higher densities of riparian spiders taking advantage of these prey. Surficial organic matter was 3-fold higher in old-growth streams, and these streams had much higher ecosystem respiration. Insect production (g m-2 y-1) was similar among forest types, but fish density was four times higher in old-growth streams with copious log jams. At the valley scale, however, insect production (g m-1 valley-1) and trout density (number m-1 valley-1) were 2-fold and 10-fold higher, respectively, in old-growth streams. This is because multi-thread reaches created by log jams have much greater stream area and stream length per meter of valley than single-thread channels. The more limited response of macroinvertebrates may be related to fish predation. Trout in old-growth streams had similar growth rates and higher fat content than fish in other streams, in spite of occurring at higher densities and at higher elevation/colder temperatures. This suggests that the positive fish effect observed in old-growth streams is related to greater availability of invertebrate prey, which is consistent with our original hypothesis. Preliminary analyses suggest that spider densities do not respond strongly to differences in stream morphology, but rather to changes in elevation and associated air temperatures. These results demonstrate strong indirect effects of forest age and valley morphometry on organic matter storage and animal secondary production in streams, mediated by the direct effects of the presence or absence of log jams.

  17. Structural and Sequence Stratigraphic Analysis of the Onshore Nile Delta, Egypt.

    NASA Astrophysics Data System (ADS)

    Barakat, Moataz; Dominik, Wilhelm

    2010-05-01

    The Nile Delta is considered the earliest known delta in the world; it was already described by Herodotus in the 5th century BC. Nowadays, the Nile Delta is an emerging giant gas province in the Middle East, with proven gas reserves that have more than doubled in recent years. The Nile Delta basin contains a thick sedimentary sequence inferred to extend from Jurassic to recent time, and structural styles and depositional environments varied during this period. The facies architecture and sequence stratigraphy of the Nile Delta are resolved using seismic stratigraphy based on 2D seismic lines, including synthetic seismograms and ties to well log data. Synthetic seismograms were constructed using sonic and density logs. The structural and sequence-stratigraphic development of the basin was resolved by combining both approaches. Seven chrono-stratigraphic boundaries have been identified and correlated on seismic and well log data. Several unconformities, ranging from angular unconformities to disconformities, were also identified on the seismic lines. Furthermore, time structure maps, velocity maps, depth structure maps and isopach maps were constructed using the seismic lines and log data. Several structural features were identified: normal faults, growth faults, listric faults, secondary antithetic faults and large rotated fault blocks of mainly Miocene age. In some cases minor rollover structures could be identified. Sedimentary features such as paleo-channels were distinctly recognized. Typical sequence-stratigraphic features such as incised valleys, clinoforms, topsets, offlaps and onlaps are identified and traced on the seismic lines, allowing good insight into the sequence-stratigraphic history of the Nile Delta, especially in the Miocene to Pliocene clastic sedimentary succession.

  18. Analytical modeling of electron energy loss spectroscopy of graphene: Ab initio study versus extended hydrodynamic model.

    PubMed

    Djordjević, Tijana; Radović, Ivan; Despoja, Vito; Lyon, Keenan; Borka, Duško; Mišković, Zoran L

    2018-01-01

    We present an analytical modeling of the electron energy loss (EEL) spectroscopy data for free-standing graphene obtained by scanning transmission electron microscope. The probability density for energy loss of fast electrons traversing graphene under normal incidence is evaluated using an optical approximation based on the conductivity of graphene given in the local, i.e., frequency-dependent form derived by both a two-dimensional, two-fluid extended hydrodynamic (eHD) model and an ab initio method. We compare the results for the real and imaginary parts of the optical conductivity in graphene obtained by these two methods. The calculated probability density is directly compared with the EEL spectra from three independent experiments and we find very good agreement, especially in the case of the eHD model. Furthermore, we point out that the subtraction of the zero-loss peak from the experimental EEL spectra has a strong influence on the analytical model for the EEL spectroscopy data. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
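
    A brute-force Monte Carlo sketch of the quantities described above (mean, standard deviation, density points, and quantiles of the magnitude of a Delta v vector with independent zero-mean Gaussian components) is given below. The per-axis standard deviations are arbitrary illustrative values, and this is not the author's algorithm, only a check of the same statistics by direct sampling.

    ```python
    import numpy as np

    # Monte Carlo statistics of |dv| for a 3-component zero-mean Gaussian Delta v vector
    # with possibly unequal standard deviations (values below are arbitrary examples, m/s).
    rng = np.random.default_rng(0)
    sigmas = np.array([1.0, 2.0, 0.5])
    samples = rng.normal(0.0, sigmas, size=(100_000, 3))
    dv_mag = np.linalg.norm(samples, axis=1)

    print("mean |dv| :", round(dv_mag.mean(), 3))
    print("std  |dv| :", round(dv_mag.std(), 3))
    for p in (0.5, 0.9, 0.99):                      # points of the inverse cumulative distribution
        print(f"{p:.0%} quantile :", round(np.quantile(dv_mag, p), 3))
    density, edges = np.histogram(dv_mag, bins=50, density=True)   # points of the pdf of |dv|
    print("pdf peak near |dv| =", round(edges[np.argmax(density)], 3))
    ```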

  20. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density: word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  1. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  2. Logging methods and peeling of Aspen

    Treesearch

    T. Schantz-Hansen

    1948-01-01

    The logging of forest products is influenced by many factors, including the size of the trees, density of the stand, the soundness of the trees, size of the area logged, topography and soil, weather conditions, the degree of utilization, the skill of the logger and the equipment used, the distance from market, etc. Each of these factors influences not only the method...

  3. Cosmological tests of the Hoyle-Narlikar conformal gravity

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Narlikar, J. V.

    1980-01-01

    For the first time, the Hoyle-Narlikar theory, with creation of matter and a variable gravitational constant G, is subjected to the following cosmological tests: (1) the magnitude versus z relation, (2) the N(m) versus m relation for quasars, (3) the metric angular diameters versus z relation, (4) the isophotal angles versus z relation, (5) the log N-log S radio source count, and finally (6) the 3 K radiation. It is shown that the theory passes all these tests just as well as the standard cosmology, with the additional advantage that the geometry of the universe is uniquely determined, with a curvature parameter equal to zero. It is also interesting to note that the variability of G affects the log N-log S curve in a way similar to the density evolution introduced in standard cosmologies. The agreement with the data is therefore achieved without recourse to an ad hoc density evolution.

  4. In-situ petrophysical properties of hotspot volcanoes. Results from ODP Leg 197, Detroit Seamount and HSDP II borehole, Hawaii

    NASA Astrophysics Data System (ADS)

    Kock, I.; Pechnig, R.; Buysch, A.; Clauser, C.

    2003-04-01

    During ODP Leg 197 an extensive logging program was run at Site 1203, Detroit Seamount. This seamount is part of the Emperor seamount chain, a continuation of the Hawaiian volcanic chain. Standard ODP/LDEO logging tool strings were used to measure porosity, density, resistivity, P- and S-wave velocities and gamma-ray activity. The FMS tool yielded detailed high-resolution resistivity images of the borehole wall. By interpretation and statistical analysis of the logging parameters, a petrophysical classification of the drilled rock could be derived. The pillow lava recovered in the cores exhibits low porosity, low resistivity and high density, indicating few or no vesicles in the non-fractured rock unit. Compared to the pillow basalts, subaerial basalts show increasing porosity, gamma-ray and potassium content and decreasing density, resistivity and velocity. A basalt with no or few vesicles and a basalt with average or many vesicles can clearly be distinguished. The volcaniclastics show lower resistivity, lower sonic velocities, higher porosities and lower densities than the basalts. Three different rock types can be distinguished within the volcaniclastics: tuffs, resedimented tephra and breccia. The tuff shows medium porosity and density, and low gamma-ray and potassium content. The log responses from the resedimented tephra suggest that the tephra is more easily altered than the tuff. The log responses from the breccia lie between those of the tuff and the tephra, but the breccia can clearly be identified in the FMS borehole images. A similar rock content was found in the Hawaiian Scientific Drilling Project (HSDP) borehole, where gamma-ray activity, electrical resistivity and sonic velocity were measured down to 2700 mbsl. Compared to the 72-76 Ma old Detroit Seamount basalts, the HSDP subaerial and submarine lava flows show significantly lower gamma-ray activity, while sonic velocity and electrical resistivity are comparable. Differences in gamma-ray activity might be due to different primary compositions of the melt or to long-lasting low-temperature alteration. Investigations on this topic are in progress.

  5. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density Pt(x ) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  6. Macular pigment levels do not influence C-Quant retinal straylight estimates in young Caucasians.

    PubMed

    Beirne, Raymond O

    2014-03-01

    Individuals with higher than normal levels of macular pigment optical density (MPOD) are less affected by disability glare, when using glare source lights with a strong short-wavelength component. The aim of this study was to investigate whether estimates of retinal straylight from the Oculus Cataract Quantifier (C-Quant), which corresponds to disability glare, are associated with estimates of macular pigment levels in young Caucasian eyes. Thirty-seven Caucasian individuals (aged 19 to 40 years) with good visual acuity, free from ocular disease and with clear ocular media participated. Macular pigment optical density was measured at 0.5 degrees eccentricity from the foveal centre using a heterochromatic flicker photometry-based densitometer instrument from MacularMetrics. Retinal straylight was estimated using the C-Quant, a commercially available device, which uses a psychophysical compensation comparison method. Mean MPOD was 0.39 ± 0.18 log units (range zero to 0.80) and was not significantly related to age (r = -0.07, p = 0.66). Mean straylight parameter (s) was 1.01 ± 0.09 log units (range 0.86 to 1.21) and was not significantly related to age (r = -0.03, p = 0.86). Although there was a small tendency for straylight measurements to be reduced in individuals with higher levels of MPOD, there was no statistically significant relationship between retinal straylight and MPOD (r = -0.17, p = 0.30). Ocular straylight, estimated by the Oculus C-Quant, is little influenced by macular pigment optical density. As the C-Quant uses balanced (white) lights, it is suggested that the previous findings on the effect of macular pigment critically depend on the use of blue-dominant glare sources. © 2013 The Author. Clinical and Experimental Optometry © 2013 Optometrists Association Australia.

  7. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable upon analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
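
    A sketch of the generalised Kapteyn multiplicative process discussed above, using the q-product of Borges, x ⊗_q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)) (set to zero when the bracket is non-positive). The choice of positive random factors and all parameter values are illustrative assumptions; q = 1 recovers the ordinary product and hence the usual log-Normal limit.

    ```python
    import numpy as np

    def q_product(x, y, q):
        """q-product of Borges (2004); reduces to the ordinary product x*y as q -> 1."""
        if abs(q - 1.0) < 1e-12:
            return x * y
        base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
        out = np.zeros_like(base)
        pos = base > 0.0                 # cutoff: the q-product is zero when the bracket is not positive
        out[pos] = base[pos] ** (1.0 / (1.0 - q))
        return out

    def kapteyn_q(n_steps, n_samples, q, rng):
        """Kapteyn-type multiplicative process built with the q-product (illustrative factors)."""
        x = np.ones(n_samples)
        for _ in range(n_steps):
            factors = rng.lognormal(mean=0.0, sigma=0.1, size=n_samples)   # hypothetical positive factors
            x = q_product(x, factors, q)
        return x

    rng = np.random.default_rng(1)
    for q in (0.8, 1.0, 1.2):            # q = 1 is the traditional Kapteyn process
        x = kapteyn_q(50, 20_000, q, rng)
        x = x[x > 0]
        print(f"q={q}: mean(log x)={np.mean(np.log(x)):.3f}, std(log x)={np.std(np.log(x)):.3f}")
    ```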

  8. Post-wildfire logging hinders regeneration and increases fire risk.

    PubMed

    Donato, D C; Fontaine, J B; Campbell, J L; Robinson, W D; Kauffman, J B; Law, B E

    2006-01-20

    We present data from a study of early conifer regeneration and fuel loads after the 2002 Biscuit Fire, Oregon, USA, with and without postfire logging. Natural conifer regeneration was abundant after the high-severity fire. Postfire logging reduced median regeneration density by 71%, significantly increased downed woody fuels, and thus increased short-term fire risk. Additional reduction of fuels is necessary for effective mitigation of fire risk. Postfire logging can be counterproductive to the goals of forest regeneration and fuel reduction.

  9. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.

  10. Dose-response algorithms for water-borne Pseudomonas aeruginosa folliculitis.

    PubMed

    Roser, D J; Van Den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A

    2015-05-01

    We developed two dose-response algorithms for P. aeruginosa pool folliculitis using bacterial and lesion density estimates, associated with undetectable, significant, and almost certain folliculitis. Literature data were fitted to Furumoto & Mickey's equations, developed for plant epidermis-invading pathogens: N_l = A ln(1 + BC) (log-linear model); P_inf = 1 − e^(−r_C·C) (exponential model), where A and B are 2.51644 × 10^7 lesions/m^2 and 2.28011 × 10^-11 c.f.u./ml P. aeruginosa, respectively; C = pathogen density (c.f.u./ml), N_l = folliculitis lesions/m^2, P_inf = probability of infection, and r_C = 4.3 × 10^-7 c.f.u./ml P. aeruginosa. Outbreak data indicate that these algorithms apply to exposure durations of 41 ± 25 min. Typical water quality benchmarks (≈10^-2 c.f.u./ml) appear conservative but still useful, as the literature indicates that repeated detection likely implies unstable control barriers and bacterial bloom potential. In future, culture-based outbreak testing should be supplemented with quantitative polymerase chain reaction and organic carbon assays, and quantification of folliculitis aetiology, to better understand P. aeruginosa risks.
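
    The two fitted models quoted above can be evaluated directly. The sketch below simply plugs the published parameter values into the log-linear and exponential equations; the example pathogen densities are arbitrary, and the units are quoted as given in the abstract.

    ```python
    import math

    # Parameter values as quoted above (units as given in the abstract).
    A   = 2.51644e7      # lesions/m^2
    B   = 2.28011e-11    # coefficient of C inside the logarithm
    r_C = 4.3e-7         # coefficient of C in the exponential model

    def lesion_density(C):
        """Log-linear model: N_l = A ln(1 + B*C), lesions per m^2 at pathogen density C (c.f.u./ml)."""
        return A * math.log(1.0 + B * C)

    def p_infection(C):
        """Exponential model: P_inf = 1 - exp(-r_C * C) at pathogen density C (c.f.u./ml)."""
        return 1.0 - math.exp(-r_C * C)

    for C in (1e-2, 1e2, 1e5):   # example densities, including the ~1e-2 c.f.u./ml benchmark
        print(f"C = {C:g} c.f.u./ml -> N_l = {lesion_density(C):.3g} lesions/m^2, P_inf = {p_infection(C):.3g}")
    ```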

  11. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    NASA Astrophysics Data System (ADS)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Every one of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be adjusted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of the firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well adjusted by quasi-exponential curves for the four metrics.

  12. Mechanism-based model for tumor drug resistance.

    PubMed

    Kuczek, T; Chan, T C

    1992-01-01

    The development of tumor resistance to cytotoxic agents has important implications in the treatment of cancer. If supported by experimental data, mathematical models of resistance can provide useful information on the underlying mechanisms and aid in the design of therapeutic regimens. We report on the development of a model of tumor-growth kinetics based on the assumption that the rates of cell growth in a tumor are normally distributed. We further assumed that the growth rate of each cell is proportional to its rate of total pyrimidine synthesis (de novo plus salvage). Using an ovarian carcinoma cell line (2008) and resistant variants selected by chronic exposure to a pyrimidine antimetabolite, N-phosphonacetyl-L-aspartate (PALA), we derived a simple and specific analytical form describing the growth curves generated in 72 h growth assays. The model assumes that the rate of de novo pyrimidine synthesis, denoted alpha, is shifted down by an amount proportional to the log10 PALA concentration and that cells whose rate of pyrimidine synthesis falls below a critical level, denoted alpha_0, can no longer grow. This is described by the equation: P(growth) = P(alpha_0 < alpha − constant × log10[PALA]). This model predicts that when growth curves are plotted on probit paper, they will produce straight lines. This prediction is in agreement with the data we obtained for the 2008 cells. Another prediction of this model is that the same probit plots for the resistant variants should shift to the right in a parallel fashion. Probit plots of the dose-response data obtained for each resistant 2008 line following chronic exposure to PALA again confirmed this prediction. Correlation of the rightward shift of dose responses with uridine transport (r = 0.99) also suggests that salvage metabolism plays a key role in tumor-cell resistance to PALA. Furthermore, the slope of the regression lines enables the detection of synergy such as that observed between dipyridamole and PALA. Although the rate-normal model was used here to study the role of salvage metabolism in PALA resistance, it may be widely applicable to modeling other resistance mechanisms such as gene amplification of target enzymes.
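
    A small numerical sketch of the rate-normal model described above: with alpha normally distributed, the probability of growth is a normal cumulative distribution function that is linear on a probit scale in log10 PALA concentration. All parameter values below are hypothetical and chosen only to show the straight-line probit behaviour predicted by the model.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical parameters of the rate-normal probit model (arbitrary units).
    mu, sigma = 1.0, 0.25     # mean and spread of the per-cell synthesis rate alpha
    alpha0    = 0.3           # critical rate below which a cell cannot grow
    k         = 0.2           # downward shift in alpha per log10 unit of PALA concentration

    def p_growth(pala_conc):
        """P(growth) = P(alpha0 < alpha - k*log10[PALA]) with alpha ~ Normal(mu, sigma)."""
        shift = k * np.log10(pala_conc)
        return norm.cdf((mu - shift - alpha0) / sigma)

    pala = np.logspace(-1, 3, 5)              # hypothetical PALA concentrations
    probits = norm.ppf(p_growth(pala))        # probit transform of the growth probabilities
    print(np.round(probits, 3))               # equally spaced values: linear in log10 concentration
    ```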

  13. Optimized lower leg injury probability curves from postmortem human subject tests under axial impacts.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko

    2014-01-01

    The objective of this study was to derive optimum injury probability curves describing human tolerance of the lower leg using parametric survival analysis. The study reexamined lower-leg postmortem human subject (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters of the parametric survival model were estimated using the maximum likelihood approach, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal, and log-logistic distributions was selected based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution, and the relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the most optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-olds at 5, 25, and 50% risk levels for lower-leg fracture. For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial-loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because the procedures used in the present survival analysis are accepted by international automotive communities, the current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
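
    To make the idea of a Weibull risk curve with age as a covariate concrete, the sketch below evaluates an accelerated-failure-time style Weibull risk function of peak axial force. The shape parameter and the age-dependent scale are hypothetical placeholders, not the fitted values of the study, so the printed risks will not reproduce the force levels quoted above.

    ```python
    import numpy as np

    # Illustrative Weibull injury-risk curve versus peak axial force, with age scaling the
    # Weibull scale parameter (all numbers are hypothetical, NOT the study's fitted values).
    shape = 4.0

    def scale_kN(age):
        return np.exp(2.6 - 0.008 * age)          # hypothetical age-dependent Weibull scale (kN)

    def fracture_risk(force_kN, age):
        """P(AIS 2+ lower-leg fracture) at a given peak force (kN) and specimen age (years)."""
        return 1.0 - np.exp(-(force_kN / scale_kN(age)) ** shape)

    forces = np.array([5.0, 7.5, 10.0])           # example peak axial forces in kN
    for age in (25, 45, 65):
        print(age, "years:", np.round(fracture_risk(forces, age), 2))
    ```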

  14. Fatigue shifts and scatters heart rate variability in elite endurance athletes.

    PubMed

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in a 'fatigue' or a 'no-fatigue' state in 'real life' conditions. 57 elite Nordic skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). The fatigue state was assessed with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in the low (LF) and high frequency (HF) ranges expressed in ms^2 and normalized units (nu)] and the status with and without fatigue. Variables not distributed normally were transformed by taking their common logarithm (log10). 172 trials were identified as in a 'fatigue' and 891 as in a 'no-fatigue' state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (-0.36±0.04), logLFSU (-0.27±0.04), logHFSU (-0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (-9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (-0.28±0.03), logLFST (-0.29±0.03), logHFST (-0.32±0.04). Also, the intra-individual variance of the HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). HRV was significantly lower in 'fatigue' vs. 'no-fatigue', but was accompanied by larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern.

  15. SWIFT BAT Survey of AGN

    NASA Technical Reports Server (NTRS)

    Tueller, J.; Mushotzky, R. F.; Barthelmy, S.; Cannizzo, J. K.; Gehrels, N.; Markwardt, C. B.; Skinner, G. K.; Winter, L. M.

    2008-01-01

    We present the results of the analysis of the first 9 months of data of the Swift BAT survey of AGN in the 14-195 keV band. Using archival X-ray data or follow-up Swift XRT observations, we have identified 129 (103 AGN) of 130 objects detected at |b| > 15° and with significance > 4.8σ. One source remains unidentified. These same X-ray data have allowed measurement of the X-ray properties of the objects. We fit a power law to the log N - log S distribution, and find the slope to be 1.42+/-0.14. Characterizing the differential luminosity function data as a broken power law, we find a break luminosity log L*(ergs/s) = 43.85+/-0.26. We obtain a mean photon index 1.98 in the 14-195 keV band, with an rms spread of 0.27. Integration of our luminosity function gives a local volume density of AGN above 10^41 erg/s of 2.4x10^-3 Mpc^-3, which is about 10% of the total luminous local galaxy density above M* = -19.75. We have obtained X-ray spectra from the literature and from Swift XRT follow-up observations. These show that the distribution of log nH is essentially flat from nH = 10^20 cm^-2 to 10^24 cm^-2, with 50% of the objects having column densities of less than 10^22 cm^-2. BAT Seyfert galaxies have a median redshift of 0.03, a maximum log luminosity of 45.1, and approximately half have log nH > 22.

  16. Index map of cross sections through parts of the Appalachian basin (Kentucky, New York, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia): Chapter E.1 in Coal and petroleum resources in the Appalachian basin: distribution, geologic framework, and geochemical character

    USGS Publications Warehouse

    Ryder, Robert T.; Trippi, Michael H.; Ruppert, Leslie F.; Ryder, Robert T.

    2014-01-01

    The appendixes in chapters E.4.1 and E.4.2 include (1) Log ASCII Standard (LAS) files, which encode gamma-ray, neutron, density, and other logs in text files that can be used by most well-logging software programs; and (2) graphic well-log traces. In the appendix to chapter E.4.1, the well-log traces are accompanied by lithologic descriptions with formation tops.

  17. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint used during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described by the clinical data and whose standard deviation was 1% was generated by using the `randn' function in MATLAB. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP increased in proportion to V%ratio, that the predicted rectal NTCP was patient-specific, and that the starting point of IMRT optimization for a given patient might therefore be different. The PDF of the rectal NTCP was obtained automatically for each group, with the smoothness of the probability distribution increasing with the number of data points and with the kernel window width. We showed that patient-specific dose constraints can be automatically generated during prostate IMRT optimization and that our method can reduce the IMRT optimization time as well as maintain the IMRT plan quality.
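
    A minimal sketch of the final density-estimation step described above: building the probability density function of rectal NTCP values with a Gaussian kernel. The NTCP samples below are synthetic stand-ins generated around a hypothetical trend in V%ratio, not clinical data, and scipy's gaussian_kde stands in for the Gaussian-kernel density estimator.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Gaussian-kernel estimate of the rectal-NTCP probability density (synthetic example data).
    rng = np.random.default_rng(0)
    v_ratio = 15.0                                       # hypothetical overlap ratio V%ratio (percent)
    ntcp_trend = 0.02 * v_ratio                          # hypothetical fitted NTCP trend at this V%ratio
    ntcp_samples = np.clip(rng.normal(ntcp_trend, 0.01, size=200), 0.0, 1.0)

    pdf = gaussian_kde(ntcp_samples)                     # kernel window width set by the default rule
    grid = np.linspace(0.0, 1.0, 501)
    density = pdf(grid)
    print("mode of the estimated NTCP pdf:", round(grid[np.argmax(density)], 3))
    ```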

  18. IMPROVED Cr II log(gf ) VALUES AND ABUNDANCE DETERMINATIONS IN THE PHOTOSPHERES OF THE SUN AND METAL-POOR STAR HD 84937

    PubMed Central

    Lawler, J. E.; Sneden, C.; Nave, G.; Den Hartog, E. A.; Emrahoğlu, N.; Cowan, J. J.

    2017-01-01

    New emission branching fraction (BF) measurements for 183 lines of the second spectrum of chromium (Cr II) and new radiative lifetime measurements from laser-induced fluorescence for 8 levels of Cr+ are reported. The goals of this study are to improve transition probability measurements in Cr II and reconcile solar and stellar Cr abundance values based on Cr I and Cr II lines. Eighteen spectra from three Fourier transform spectrometers, supplemented with ultraviolet spectra from a high-resolution echelle spectrometer, are used in the BF measurements. Radiative lifetimes from this study and earlier publications are used to convert the BFs into absolute transition probabilities. These new laboratory data are applied to determine the Cr abundance log ε in the Sun and the metal-poor star HD 84937. The mean result in the Sun is 〈log ε(Cr II)〉 = 5.624 ± 0.009 compared to 〈log ε(Cr I)〉 = 5.644 ± 0.006, on a scale with the hydrogen abundance log ε(H) = 12 and with the uncertainty representing only line-to-line scatter. A Saha (ionization balance) test on the photosphere of HD 84937 is also performed, yielding 〈log ε(Cr II)〉 = 3.417 ± 0.006 and 〈log ε(Cr I, lower level excitation potential E.P. > 0 eV)〉 = 3.374 ± 0.011 for this dwarf star. We find a correlation of Cr with the iron-peak element Ti, suggesting an associated nucleosynthetic production. Four iron-peak elements (Cr along with Ti, V, and Sc) appear to have a similar (or correlated) production history; other iron-peak elements appear not to be associated with Cr. PMID:28579650

  19. IMPROVED Cr II log(gf ) VALUES AND ABUNDANCE DETERMINATIONS IN THE PHOTOSPHERES OF THE SUN AND METAL-POOR STAR HD 84937.

    PubMed

    Lawler, J E; Sneden, C; Nave, G; Den Hartog, E A; Emrahoğlu, N; Cowan, J J

    2017-01-01

    New emission branching fraction (BF) measurements for 183 lines of the second spectrum of chromium (Cr II) and new radiative lifetime measurements from laser-induced fluorescence for 8 levels of Cr+ are reported. The goals of this study are to improve transition probability measurements in Cr II and reconcile solar and stellar Cr abundance values based on Cr I and Cr II lines. Eighteen spectra from three Fourier transform spectrometers, supplemented with ultraviolet spectra from a high-resolution echelle spectrometer, are used in the BF measurements. Radiative lifetimes from this study and earlier publications are used to convert the BFs into absolute transition probabilities. These new laboratory data are applied to determine the Cr abundance log ε in the Sun and the metal-poor star HD 84937. The mean result in the Sun is 〈log ε(Cr II)〉 = 5.624 ± 0.009 compared to 〈log ε(Cr I)〉 = 5.644 ± 0.006, on a scale with the hydrogen abundance log ε(H) = 12 and with the uncertainty representing only line-to-line scatter. A Saha (ionization balance) test on the photosphere of HD 84937 is also performed, yielding 〈log ε(Cr II)〉 = 3.417 ± 0.006 and 〈log ε(Cr I, lower level excitation potential E.P. > 0 eV)〉 = 3.374 ± 0.011 for this dwarf star. We find a correlation of Cr with the iron-peak element Ti, suggesting an associated nucleosynthetic production. Four iron-peak elements (Cr along with Ti, V, and Sc) appear to have a similar (or correlated) production history; other iron-peak elements appear not to be associated with Cr.

  20. Subsurface Rock Physical Properties by Downhole Loggings - Case Studies of Continental Deep Drilling in Kanto Distinct, Japan

    NASA Astrophysics Data System (ADS)

    Omura, K.

    2014-12-01

    In recent years, many physical logging campaigns have been carried out in deep boreholes. These loggings are direct in-situ measurements of rock physical properties under the ground and provide significant basic data for geological, geophysical and geotechnical investigations, e.g., tectonic history, seismic wave propagation, and ground motion prediction. Since about the 1980's, the National Research Institute for Earth Science and Disaster Prevention (NIED) has drilled deep boreholes (from 200 m to 3000 m depth) in the sedimentary basin of the Kanto district, Japan, for the purposes of installing seismographs and hydrological instruments and making in-situ stress and pore pressure measurements. Downhole physical loggings were conducted in these boreholes: spontaneous potential, electrical resistance, elastic wave velocity, formation density, neutron porosity, total gamma ray, caliper, and temperature loggings. In many cases, digital data values were provided every 2 m, 1 m or 0.1 m; in other cases, we read printed graphs of the logging plots and digitized the values. Data from about 30 boreholes are compiled. In particular, changes in the logging data at the interface between the shallow part (soft sedimentary rock) and the base rock (equivalent to hard pre-Neogene rock) are examined. In this presentation, the correlations among the physical properties of the rock (especially formation density, elastic wave velocity and electrical resistance) are introduced and their relation to the lithology is discussed. The formation density, elastic wave velocity and electrical resistance data divide into two groups, with densities lower or higher than 2.5 g/cm3: one corresponding to the shallow part and the other to the base rock. In each group, the elastic wave velocity and electrical resistance increase with increasing formation density; however, the rates of increase in the shallow part are smaller than in the base rock. The shallow part has a lower degree of solidification and higher porosity than the base rock, and these differences appear to be related to the differences in the rates of increase. The present data show that physical logging data are effective for exploring where the base rock lies and how its properties differ from those of the shallow part.

  1. Understanding the Influence of Turbulence in Imaging Fourier-Transform Spectrometry of Smokestack Plumes

    DTIC Science & Technology

    2011-03-01

    capability of FTS to estimate plume effluent concentrations by comparing intrusive measurements of aircraft engine exhaust with those from an FTS. A... turbojet engine. Temporal averaging was used to reduce SCAs in the spectra, and spatial maps of temperature and concentration were generated. The time...density function (PDF) is defined as the derivative of the CDF, and describes the probability of obtaining a given value of X. For a normally...

  2. Study of sea-surface slope distribution and its effect on radar backscatter based on Global Precipitation Measurement Ku-band precipitation radar measurements

    NASA Astrophysics Data System (ADS)

    Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin

    2018-01-01

    The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from the moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of the wind speed. Then, the performance of the modified Liu distribution, i.e., the Liu distribution using the obtained peakedness coefficient estimate; the Gaussian distribution; and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences because the total or slick-filtered, rather than the radar-filtered, probability density is included in the distribution. The best-performing distribution varies with both incidence angle and wind speed.

  3. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values.

    PubMed

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-01-30

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT for which a log transformation was not optimal for producing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
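    As an illustration of the transformation search described above, the short Python sketch below selects the Box-Cox parameter λ that maximizes the Shapiro-Wilk P-value. The λ grid and the synthetic, positively skewed "SUVmax-like" values are illustrative assumptions, not the study's data or code.

```python
import numpy as np
from scipy import stats

def optimal_boxcox_lambda(values, lambdas=np.linspace(-2.0, 2.0, 401)):
    """Return the Box-Cox parameter that maximizes the Shapiro-Wilk P-value."""
    best_lam, best_p = None, -1.0
    for lam in lambdas:
        transformed = stats.boxcox(values, lmbda=lam)  # fixed-lambda transform
        p = stats.shapiro(transformed).pvalue
        if p > best_p:
            best_lam, best_p = lam, p
    return best_lam, best_p

# Illustrative, positively skewed "SUVmax-like" values (not the study's data).
rng = np.random.default_rng(0)
suv_max = rng.lognormal(mean=1.0, sigma=0.6, size=57)
lam, p = optimal_boxcox_lambda(suv_max)
print(f"optimal lambda = {lam:.2f}, Shapiro-Wilk P = {p:.3f}")
```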

  4. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    NASA Astrophysics Data System (ADS)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT for which a log transformation was not optimal for producing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.

  5. Ultraviolet Survey of CO and H2 in Diffuse Molecular Clouds: The Reflection of Two Photochemistry Regimes in Abundance Relationships

    NASA Astrophysics Data System (ADS)

    Sheffer, Y.; Rogers, M.; Federman, S. R.; Abel, N. P.; Gredel, R.; Lambert, D. L.; Shaw, G.

    2008-11-01

    We carried out a comprehensive far-UV survey of 12CO and H2 column densities along diffuse molecular Galactic sight lines. This sample includes new measurements of CO from HST spectra along 62 sight lines and new measurements of H2 from FUSE data along 58 sight lines. In addition, high-resolution optical data were obtained at the McDonald and European Southern Observatories, yielding new abundances for CH, CH+, and CN along 42 sight lines to aid in interpreting the CO results. These new sight lines were selected according to detectable amounts of CO in their spectra and provide information on both lower density (<=100 cm-3) and higher density diffuse clouds. A plot of log N(CO) versus log N(H2) shows that two power-law relationships are needed for a good fit of the entire sample, with a break located at log N(CO, cm-2) = 14.1 and log N(H2) = 20.4, corresponding to a change in production route for CO in higher density gas. Similar logarithmic plots among all five diatomic molecules reveal additional examples of dual slopes in the cases of CO versus CH (break at log N = 14.1, 13.0), CH+ versus H2 (13.1, 20.3), and CH+ versus CO (13.2, 14.1). We employ both analytical and numerical chemical schemes in order to derive details of the molecular environments. In the denser gas, where C2 and CN molecules also reside, reactions involving C+ and OH are the dominant factor leading to CO formation via equilibrium chemistry. In the low-density gas, where equilibrium chemistry studies have failed to reproduce the abundance of CH+, our numerical analysis shows that nonequilibrium chemistry must be employed for correctly predicting the abundances of both CH+ and CO.

  6. Estimation of the POD function and the LOD of a qualitative microbiological measurement method.

    PubMed

    Wilrich, Cordula; Wilrich, Peter-Theodor

    2009-01-01

    Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
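    The complementary log-log model described above can be fitted by maximum likelihood and inverted to obtain LOD(p). The Python sketch below is a minimal illustration under assumed contamination levels and detection outcomes; it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, conc, y):
    """Negative log-likelihood of a complementary log-log POD model."""
    a, b = theta
    pod = 1.0 - np.exp(-np.exp(a + b * np.log(conc)))  # POD(c) under the cloglog link
    pod = np.clip(pod, 1e-12, 1.0 - 1e-12)
    return -np.sum(y * np.log(pod) + (1 - y) * np.log(1.0 - pod))

def lod(theta, p=0.5):
    """Contamination detected with probability p under the fitted model."""
    a, b = theta
    return np.exp((np.log(-np.log(1.0 - p)) - a) / b)

# Illustrative data: contamination levels (CFU/g) and 0/1 detection outcomes.
conc = np.repeat([0.1, 0.5, 1.0, 2.0, 5.0], 20)
rng = np.random.default_rng(1)
y = rng.binomial(1, 1.0 - np.exp(-conc))  # assumed "true" POD curve for the demo

fit = minimize(neg_log_lik, x0=[0.0, 1.0], args=(conc, y), method="Nelder-Mead")
print("LOD50% estimate:", lod(fit.x, p=0.5))
```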

  7. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
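    A minimal sketch of the profile-likelihood idea for a T-year quantile, here for the Gumbel case using the Nelder-Mead algorithm, is given below. The synthetic annual-maximum series, the quantile grid, and the chi-square cutoff for a 95% interval are illustrative assumptions rather than a reproduction of the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def gumbel_loglik(mu, sigma, x):
    z = (x - mu) / sigma
    return -len(x) * np.log(sigma) - np.sum(z) - np.sum(np.exp(-z))

def profile_loglik(xT, T, x):
    """Maximize the Gumbel log-likelihood over sigma with the T-year quantile fixed at xT."""
    c = np.log(-np.log(1.0 - 1.0 / T))
    def nll(log_sigma):
        sigma = np.exp(log_sigma[0])
        mu = xT + sigma * c            # location implied by the fixed quantile
        return -gumbel_loglik(mu, sigma, x)
    res = minimize(nll, x0=[np.log(np.std(x))], method="Nelder-Mead")
    return -res.fun

# Illustrative annual-maximum series (not one of the Itajai-Acu records).
rng = np.random.default_rng(2)
x = rng.gumbel(loc=100.0, scale=25.0, size=40)
T = 100.0

# Scan candidate quantiles; the grid is assumed wide enough to contain the interval.
grid = np.linspace(np.percentile(x, 90), x.max() + 4.0 * np.std(x), 400)
prof = np.array([profile_loglik(q, T, x) for q in grid])
inside = grid[2.0 * (prof.max() - prof) <= chi2.ppf(0.95, df=1)]
print(f"approx. 95% likelihood-based CI for the {T:.0f}-year flood: "
      f"[{inside.min():.1f}, {inside.max():.1f}]")
```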

  8. THE DARKEST SHADOWS: DEEP MID-INFRARED EXTINCTION MAPPING OF A MASSIVE PROTOCLUSTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Michael J.; Tan, Jonathan C.; Kainulainen, Jouni

    We use deep 8 μm Spitzer-IRAC imaging of massive Infrared Dark Cloud (IRDC) G028.37+00.07 to construct a mid-infrared (MIR) extinction map that probes mass surface densities up to Σ ∼ 1 g cm-2 (A_V ∼ 200 mag), amongst the highest values yet probed by extinction mapping. Merging with an NIR extinction map of the region creates a high dynamic range map that reveals structures down to A_V ∼ 1 mag. We utilize the map to: (1) measure a cloud mass ∼7 × 10^4 M_☉ within a radius of ∼8 pc. 13CO kinematics indicate that the cloud is gravitationally bound. It thus has the potential to form one of the most massive young star clusters known in the Galaxy. (2) Characterize the structures of 16 massive cores within the IRDC, finding they can be fit by singular polytropic spheres with ρ ∝ r^(-k_ρ) and k_ρ = 1.3 ± 0.3. They have mean Σ ≃ 0.1-0.4 g cm-2, relatively low values that, along with their measured cold temperatures, suggest that magnetic fields, rather than accretion-powered radiative heating, are important for controlling fragmentation of these cores. (3) Determine the Σ (equivalently column density or A_V) probability distribution function (PDF) for a region that is nearly complete for A_V > 3 mag. The PDF is well fit by a single log-normal with mean A_V ≃ 9 mag, high compared to other known clouds. It does not exhibit a separate high-end power law tail, which has been claimed to indicate the importance of self-gravity. However, we suggest that the PDF does result from a self-similar, self-gravitating hierarchy of structures present over a wide range of scales in the cloud.

  9. Relationships between population density, fine-scale genetic structure, mating system and pollen dispersal in a timber tree from African rainforests.

    PubMed

    Duminil, J; Daïnou, K; Kaviriri, D K; Gillet, P; Loo, J; Doucet, J-L; Hardy, O J

    2016-03-01

    Owing to the reduction of population density and/or the environmental changes it induces, selective logging could affect the demography, reproductive biology and evolutionary potential of forest trees. This is particularly relevant in tropical forests where natural population densities can be low and isolated trees may be subject to outcross pollen limitation and/or produce low-quality selfed seeds that exhibit inbreeding depression. Comparing reproductive biology processes and genetic diversity of populations at different densities can provide indirect evidence of the potential impacts of logging. Here, we analysed patterns of genetic diversity, mating system and gene flow in three Central African populations of the self-compatible legume timber species Erythrophleum suaveolens with contrasting densities (0.11, 0.68 and 1.72 adults per ha). The comparison of inbreeding levels among cohorts suggests that selfing is detrimental as inbred individuals are eliminated between seedling and adult stages. Levels of genetic diversity, selfing rates (∼16%) and patterns of spatial genetic structure (Sp ∼0.006) were similar in all three populations. However, the extent of gene dispersal differed markedly among populations: the average distance of pollen dispersal increased with decreasing density (from 200 m in the high-density population to 1000 m in the low-density one). Overall, our results suggest that the reproductive biology and genetic diversity of the species are not affected by current logging practices. However, further investigations need to be conducted in low-density populations to evaluate (1) whether pollen limitation may reduce seed production and (2) the regeneration potential of the species.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Kevin J.; Parke, Stephen J.

    Quantum mechanical interactions between neutrinos and matter along the path of propagation, the Wolfenstein matter effect, are of particular importance for the upcoming long-baseline neutrino oscillation experiments, specifically the Deep Underground Neutrino Experiment (DUNE). Here, we explore specifically what about the matter density profile can be measured by DUNE, considering both the shape and normalization of the profile between the neutrinos' origin and detection. Additionally, we explore the capability of a perturbative method for calculating neutrino oscillation probabilities and whether this method is suitable for DUNE. We also briefly quantitatively explore the ability of DUNE to measure the Earth's matter density, and the impact of performing this measurement on measuring standard neutrino oscillation parameters.

  11. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0, 1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
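    The following Python sketch illustrates the finite-word-length-cutoff version of the monkey model: letter probabilities are taken as the spacings of a random division of the unit interval, all words up to a cutoff length are enumerated, and the log-probability versus log-rank slope is estimated. The alphabet size, space probability, cutoff, and rank window are illustrative choices, not values from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n_letters, p_space, max_len = 15, 0.2, 4   # illustrative choices

# Letter probabilities as spacings of a random division of the unit interval.
cuts = np.sort(rng.uniform(size=n_letters - 1))
letter_p = (1.0 - p_space) * np.diff(np.concatenate(([0.0], cuts, [1.0])))

# Enumerate all words up to the cutoff length; a word ends with the space symbol.
word_p = []
for k in range(1, max_len + 1):
    for combo in product(letter_p, repeat=k):
        word_p.append(np.prod(combo) * p_space)
word_p = np.sort(np.array(word_p))[::-1]

# Power-law exponent from the middle of the rank-probability plot.
rank = np.arange(1, len(word_p) + 1)
mid = slice(50, 5000)   # stay away from the head and the cutoff-distorted tail
slope = np.polyfit(np.log(rank[mid]), np.log(word_p[mid]), 1)[0]
print("log-probability vs. log-rank slope:", slope)  # expected near -1 for large alphabets
```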

  12. Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-01-01

    This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01, parametric Z score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
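    A minimal sketch of the leave-one-out Z-score test on log10-transformed values, as applied above to a single gray matter pixel and frequency, might look like the following. The synthetic "current density" values stand in for real LORETA output and are an assumption of this example.

```python
import numpy as np

def loo_false_positive_rate(values, z_crit=1.96):
    """Leave-one-out Z-score test on log10-transformed values from normal subjects.

    Each subject is withdrawn in turn and Z-scored against the remaining
    subjects; since all subjects are normal, any |Z| > z_crit is a false positive.
    """
    logged = np.log10(values)
    false_pos = 0
    for i in range(len(logged)):
        rest = np.delete(logged, i)
        z = (logged[i] - rest.mean()) / rest.std(ddof=1)
        false_pos += abs(z) > z_crit
    return false_pos / len(logged)

# Illustrative skewed values for one pixel/frequency from 43 normal subjects.
rng = np.random.default_rng(4)
pixel_values = rng.lognormal(mean=0.0, sigma=0.5, size=43)
print("leave-one-out false-positive rate:", loo_false_positive_rate(pixel_values))
```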

  13. Fluctuations in email size

    NASA Astrophysics Data System (ADS)

    Matsubara, Yoshitsugu; Musashi, Yasuo

    2017-12-01

    The purpose of this study is to explain fluctuations in email size. We have previously investigated the long-term correlations between email send requests and data flow in the system log of the primary staff email server at a university campus, finding that email size frequency follows a power-law distribution with two inflection points, and that the power-law property weakens the correlation of the data flow. However, the mechanism underlying this fluctuation is not completely understood. We collected new log data from both staff and students over six academic years and analyzed the frequency distribution thereof, focusing on the type of content contained in the emails. Furthermore, we obtained permission to collect "Content-Type" log data from the email headers. We therefore collected the staff log data from May 1, 2015 to July 31, 2015, creating two subdistributions. In this paper, we propose a model to explain these subdistributions, which follow log-normal-like distributions. In the log-normal-like model, email senders, consciously or unconsciously, regulate the size of new email sentences according to a normal distribution. The fitting of the model is acceptable for these subdistributions, and the model demonstrates power-law properties for large email sizes. An analysis of the length of new email sentences would be required for further discussion of our model; however, to protect user privacy at the participating organization, we left this analysis for future work. This study provides new knowledge on the properties of email sizes, and our model is expected to contribute to the decision on whether to establish upper size limits in the design of email services.

  14. Long-term impacts of selective logging on two Amazonian tree species with contrasting ecological and reproductive characteristics: inferences from Eco-gene model simulations.

    PubMed

    Vinson, C C; Kanashiro, M; Sebbenn, A M; Williams, T C R; Harris, S A; Boshier, D H

    2015-08-01

    The impact of logging and subsequent recovery after logging is predicted to vary depending on specific life history traits of the logged species. The Eco-gene simulation model was used to evaluate the long-term impacts of selective logging over 300 years on two contrasting Brazilian Amazon tree species, Dipteryx odorata and Jacaranda copaia. D. odorata (Leguminosae), a slow growing climax tree, occurs at very low densities, whereas J. copaia (Bignoniaceae) is a fast growing pioneer tree that occurs at high densities. Microsatellite multilocus genotypes of the pre-logging populations were used as data inputs for the Eco-gene model and post-logging genetic data was used to verify the output from the simulations. Overall, under current Brazilian forest management regulations, there were neither short nor long-term impacts on J. copaia. By contrast, D. odorata cannot be sustainably logged under current regulations; a sustainable scenario was achieved by increasing the minimum cutting diameter at breast height from 50 to 100 cm over 30-year logging cycles. Genetic parameters were only slightly affected by selective logging, with reductions in the numbers of alleles and single genotypes. In the short term, the loss of alleles seen in J. copaia simulations was the same as in real data, whereas fewer alleles were lost in D. odorata simulations than in the field. The different impacts and periods of recovery for each species support the idea that ecological and genetic information are essential at species, ecological guild or reproductive group levels to help derive sustainable management scenarios for tropical forests.

  15. Long-term impacts of selective logging on two Amazonian tree species with contrasting ecological and reproductive characteristics: inferences from Eco-gene model simulations

    PubMed Central

    Vinson, C C; Kanashiro, M; Sebbenn, A M; Williams, T CR; Harris, S A; Boshier, D H

    2015-01-01

    The impact of logging and subsequent recovery after logging is predicted to vary depending on specific life history traits of the logged species. The Eco-gene simulation model was used to evaluate the long-term impacts of selective logging over 300 years on two contrasting Brazilian Amazon tree species, Dipteryx odorata and Jacaranda copaia. D. odorata (Leguminosae), a slow growing climax tree, occurs at very low densities, whereas J. copaia (Bignoniaceae) is a fast growing pioneer tree that occurs at high densities. Microsatellite multilocus genotypes of the pre-logging populations were used as data inputs for the Eco-gene model and post-logging genetic data was used to verify the output from the simulations. Overall, under current Brazilian forest management regulations, there were neither short nor long-term impacts on J. copaia. By contrast, D. odorata cannot be sustainably logged under current regulations; a sustainable scenario was achieved by increasing the minimum cutting diameter at breast height from 50 to 100 cm over 30-year logging cycles. Genetic parameters were only slightly affected by selective logging, with reductions in the numbers of alleles and single genotypes. In the short term, the loss of alleles seen in J. copaia simulations was the same as in real data, whereas fewer alleles were lost in D. odorata simulations than in the field. The different impacts and periods of recovery for each species support the idea that ecological and genetic information are essential at species, ecological guild or reproductive group levels to help derive sustainable management scenarios for tropical forests. PMID:24424164

  16. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean.
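    Under one common reading of the definition above (the mean of the upper tail containing probability p), the extreme mean of a normal distribution has a simple closed form, sketched below and checked by Monte Carlo. The parameters are illustrative and the interpretation is an assumption, not taken from the report.

```python
import numpy as np
from scipy.stats import norm

def extreme_mean(mu, sigma, p):
    """Mean of the upper tail of N(mu, sigma^2) containing probability p.

    For a standard normal, E[X | X > z] = phi(z) / p with z = Phi^{-1}(1 - p),
    which is one reading of the 'mean of the p-th probability truncated normal'.
    """
    z = norm.ppf(1.0 - p)
    return mu + sigma * norm.pdf(z) / p

# Monte Carlo check of the closed form (illustrative parameters).
rng = np.random.default_rng(5)
x = rng.normal(10.0, 2.0, size=1_000_000)
p = 0.05
cutoff = np.quantile(x, 1.0 - p)
print(extreme_mean(10.0, 2.0, p), x[x > cutoff].mean())
```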

  17. Intracellular activity of antibiotics in a model of human THP-1 macrophages infected by a Staphylococcus aureus small-colony variant strain isolated from a cystic fibrosis patient: pharmacodynamic evaluation and comparison with isogenic normal-phenotype and revertant strains.

    PubMed

    Nguyen, Hoang Anh; Denis, Olivier; Vergison, Anne; Theunis, Anne; Tulkens, Paul M; Struelens, Marc J; Van Bambeke, Françoise

    2009-04-01

    Small-colony variant (SCV) strains of Staphylococcus aureus show reduced antibiotic susceptibility and intracellular persistence, potentially explaining therapeutic failures. The activities of oxacillin, fusidic acid, clindamycin, gentamicin, rifampin, vancomycin, linezolid, quinupristin-dalfopristin, daptomycin, tigecycline, moxifloxacin, telavancin, and oritavancin have been examined in THP-1 macrophages infected by a stable thymidine-dependent SCV strain in comparison with normal-phenotype and revertant isogenic strains isolated from the same cystic fibrosis patient. The SCV strain grew slowly extracellularly and intracellularly (1- and 0.2-log CFU increase in 24 h, respectively). In confocal and electron microscopy, SCV and the normal-phenotype bacteria remain confined in acid vacuoles. All antibiotics tested, except tigecycline, caused a net reduction in bacterial counts that was both time and concentration dependent. At an extracellular concentration corresponding to the maximum concentration in human serum (total drug), oritavancin caused a 2-log CFU reduction at 24 h; rifampin, moxifloxacin, and quinupristin-dalfopristin caused a similar reduction at 72 h; and all other antibiotics had only a static effect at 24 h and a 1-log CFU reduction at 72 h. In concentration-dependence experiments, the response to oritavancin was bimodal (two successive plateaus of -0.4 and -3.1 log CFU); tigecycline, moxifloxacin, and rifampin showed maximal effects of -1.1 to -1.7 log CFU; and the other antibiotics produced results of -0.6 log CFU or less. Addition of thymidine restored intracellular growth of the SCV strain but did not modify the activity of antibiotics (except quinupristin-dalfopristin). All drugs (except tigecycline and oritavancin) showed higher intracellular activity against normal or revertant phenotypes than against SCV strains. The data may help rationalize the design of further studies with intracellular SCV strains.

  18. Portable acuity screening for any school: validation of patched HOTV with amblyopic patients and Bangerter normals.

    PubMed

    Tsao Wu, Maya; Armitage, M Diane; Trujillo, Claire; Trujillo, Anna; Arnold, Laura E; Tsao Wu, Lauren; Arnold, Robert W

    2017-12-04

    We needed to validate and calibrate our portable acuity screening tools so amblyopia could be detected quickly and effectively at school entry. Spiral-bound flip cards and a downloadable PDF surround HOTV acuity test box with critical lines were combined with a matching card. Amblyopic patients performed critical line, then threshold acuity, which was then compared to patched E-ETDRS acuity. 5 normal subjects wore Bangerter foil goggles to simulate blur for comparative validation. The 31 treated amblyopic eyes showed logMAR HOTV = 0.97 (logMAR E-ETDRS) - 0.04, r2 = 0.88. All but two (6%) fell within less than 2 lines of difference. The five Bangerter-blurred normal subjects showed logMAR HOTV = 1.09 (logMAR E-ETDRS) + 0.15, r2 = 0.63. The critical-line test box was 98% efficient at screening within one line of 20/40. These tools reliably detected acuity in treated amblyopic patients and Bangerter-blurred normal subjects. These free and affordable tools provide sensitive screening for amblyopia in children from public, private and home schools. Changing the "pass" criterion to 4 out of 5 would improve sensitivity with somewhat slower testing for all students.

  19. Automated lithology prediction from PGNAA and other geophysical logs.

    PubMed

    Borsaru, M; Zhou, B; Aizawa, T; Karashima, H; Hashimoto, T

    2006-02-01

    Different methods of lithology prediction from geophysical data have been developed in the last 15 years. The geophysical logs used for predicting lithology are the conventional logs: sonic, neutron-neutron, gamma (total natural gamma) and density (backscattered gamma-gamma). Prompt gamma neutron activation analysis (PGNAA) is another established geophysical logging technique for in situ element analysis of rocks in boreholes. The work described in this paper was carried out to investigate the application of PGNAA to lithology interpretation. The data interpretation was conducted using the automatic interpretation program LogTrans, which is based on statistical analysis. Limited testing suggests that PGNAA logging data can be used to predict lithology. A success rate of 73% for lithology prediction was achieved from PGNAA logging data alone. It can also be used in conjunction with the conventional geophysical logs to enhance the lithology prediction.

  20. Grain coarsening in two-dimensional phase-field models with an orientation field

    NASA Astrophysics Data System (ADS)

    Korbuly, Bálint; Pusztai, Tamás; Henry, Hervé; Plapp, Mathis; Apel, Markus; Gránásy, László

    2017-05-01

    In the literature, contradictory results have been published regarding the form of the limiting (long-time) grain size distribution (LGSD) that characterizes the late stage grain coarsening in two-dimensional and quasi-two-dimensional polycrystalline systems. While experiments and the phase-field crystal (PFC) model (a simple dynamical density functional theory) indicate a log-normal distribution, other works including theoretical studies based on conventional phase-field simulations that rely on coarse grained fields, like the multi-phase-field (MPF) and orientation field (OF) models, yield significantly different distributions. In a recent work, we have shown that the coarse grained phase-field models (whether MPF or OF) yield very similar limiting size distributions that seem to differ from the theoretical predictions. Herein, we revisit this problem, and demonstrate in the case of OF models [R. Kobayashi, J. A. Warren, and W. C. Carter, Physica D 140, 141 (2000), 10.1016/S0167-2789(00)00023-3; H. Henry, J. Mellenthin, and M. Plapp, Phys. Rev. B 86, 054117 (2012), 10.1103/PhysRevB.86.054117] that an insufficient resolution of the small angle grain boundaries leads to a log-normal distribution close to those seen in the experiments and the molecular scale PFC simulations. Our paper indicates, furthermore, that the LGSD is critically sensitive to the details of the evaluation process, and raises the possibility that the differences among the LGSD results from different sources may originate from differences in the detection of small angle grain boundaries.

  1. Studies of Isolated and Non-isolated Photospheric Bright Points in an Active Region Observed by the New Vacuum Solar Telescope

    NASA Astrophysics Data System (ADS)

    Liu, Yanxiao; Xiang, Yongyuan; Erdélyi, Robertus; Liu, Zhong; Li, Dong; Ning, Zongjun; Bi, Yi; Wu, Ning; Lin, Jun

    2018-03-01

    Properties of photospheric bright points (BPs) near an active region have been studied in TiO λ 7058 Å images observed by the New Vacuum Solar Telescope of the Yunnan Observatories. We developed a novel recognition method that was used to identify and track 2010 BPs. The observed evolving BPs are classified into isolated (individual) and non-isolated (where multiple BPs are observed to display splitting and merging behaviors) sets. About 35.1% of BPs are non-isolated. For both isolated and non-isolated BPs, the brightness varies from 0.8 to 1.3 times the average background intensity and follows a Gaussian distribution. The lifetimes of BPs follow a log-normal distribution, with characteristic lifetimes of (267 ± 140) s and (421 ± 255) s, respectively. Their size also follows a log-normal distribution, with an average size of about (2.15 ± 0.74) × 10^4 km^2 and (3.00 ± 1.31) × 10^4 km^2 for area, and (163 ± 27) km and (191 ± 40) km for diameter, respectively. Our results indicate that regions with strong background magnetic field have higher BP number density and higher BP area coverage than regions with weak background field. Apparently, the brightness/size of BPs does not depend on the background field. Lifetimes in regions with strong background magnetic field are shorter than those in regions with weak background field, on average.

  2. Forward modeling of gravity data using geostatistically generated subsurface density variations

    USGS Publications Warehouse

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
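    A toy version of the forward calculation, summing the vertical attraction of discretized density cells at a set of stations, is sketched below. The point-mass approximation, the cell geometry, and the stochastic density values are illustrative simplifications; production modelling would typically use prism formulas and geostatistically simulated fields.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_gravity(cell_centres, densities, volumes, stations):
    """Vertical gravity effect (mGal) of density cells, each treated as a point mass."""
    gz = np.zeros(len(stations))
    for centre, rho, vol in zip(cell_centres, densities, volumes):
        d = centre - stations                   # vectors from stations to the cell
        r = np.linalg.norm(d, axis=1)
        gz += G * rho * vol * d[:, 2] / r**3    # z is taken positive downward
    return gz * 1e5                             # m/s^2 -> mGal

# Tiny illustrative model: a 2 x 2 x 2 block of 100 m cells under two stations.
cell_centres = np.array([[x, y, z] for x in (0.0, 100.0)
                                   for y in (0.0, 100.0)
                                   for z in (50.0, 150.0)])
volumes = np.full(len(cell_centres), 100.0**3)
rng = np.random.default_rng(6)
densities = rng.normal(200.0, 50.0, size=len(cell_centres))  # one density-contrast realization, kg/m^3
stations = np.array([[50.0, 50.0, 0.0], [250.0, 50.0, 0.0]])
print(forward_gravity(cell_centres, densities, volumes, stations))
```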

  3. Reduced density due to logging and its consequences on mating system and pollen flow in the African mahogany Entandrophragma cylindricum.

    PubMed

    Lourmas, M; Kjellberg, F; Dessard, H; Joly, H I; Chevallier, M-H

    2007-08-01

    In tropical forests, selective logging removes large trees that are often the main contributors to pollination. We studied pollination patterns of the African mahogany, Entandrophragma cylindricum (Sapelli). We investigated two plots in Cameroon corresponding to three tree densities: unlogged forest (Ndama 2002), a mildly logged forest 1 year after logging (Ndama 2003) and a severely logged forest 30 years after logging (Dimako). We used four microsatellite markers to perform paternity analysis. Selfing remained below 2% in all treatments. Pollen flow was mainly long distance but with some proximity effects. Average observed within-plot pollination distances were 338, 266 and 385 m, and pollination by trees outside the plots was 70% (Ndama 2002), 74% (Ndama 2003) and 66% (Dimako). Despite sampling a limited number of seeds from a limited number of mother trees, we obtained seeds sired by 35.6-38.3% of the potential within-plot pollen donors. While trees 20 cm in diameter contributed to pollination, results in Dimako suggest that individual larger trees contribute more to pollination than small ones. This effect was not detected in the other treatments. The results suggest extensive pollen flow in Sapelli. Hence, in Sapelli, the main limiting factor for regeneration after logging may be a reduction in the number of trees capable of producing seeds rather than genetic effects due to limits to pollen dispersal.

  4. Venous gas embolism after an open-water air dive and identical repetitive dive.

    PubMed

    Schellart, N A M; Sterk, W

    2012-01-01

    Decompression tables indicate that a repetitive dive to the same depth as a first dive should be shortened to obtain the same probability of occurrence of decompression sickness (pDCS). Repetition protocols are based on small numbers, a reason for re-examination. Since venous gas embolism (VGE) and pDCS are related, one would expect a higher bubble grade (BG) of VGE after the repetitive dive without reducing bottom time. BGs were determined in 28 divers after a first and an identical repetitive air dive of 40 minutes to 20 meters of sea water. Doppler BG scores were transformed to log number of bubbles/cm2 (logB) to allow numerical analysis. With a previously published model (Model2), pDCS was calculated for the first dive and for both dives together. From pDCS, theoretical logBs were estimated with a pDCS-to-logB model constructed from literature data. However, the pDCS of the second dive was obtained using conditional probability. This was achieved in Model2 and indirectly via tissue saturations. The combination of both models shows a significant increase of logB after the second dive, whereas the measurements showed an unexpectedly lower logB. These differences between measurements and model expectations are significant (p-values < 0.01). The reason for this discrepancy is uncertain. The most likely speculation would be that the divers, who were relatively old, did not perform physical activity for some days before the first dive. Our data suggest that, wisely, the first dive after a period of no exercise should be performed conservatively, particularly for older divers.

  5. Sparse Learning with Stochastic Composite Optimization.

    PubMed

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high probability bounds can only attain O(√{log(1/δ)/T}), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel powerful sparse online-to-batch conversion to the general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods outperform the existing methods in sparse learning ability while at the same time improving the high probability bound to approximately O(log(log(T)/δ)/λT).

  6. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.

  7. Effects of 1.9 MeV monoenergetic neutrons on Vicia faba chromosomes: microdosimetric considerations.

    PubMed

    Geard, C R

    1980-01-01

    Aerated Vicia faba root meristems were irradiated with 1.9 MeV monoenergetic neutrons. This source of neutrons optimally provides one class of particles (recoil protons) with ranges able to traverse cell nuclei at moderate to high LET. The volumes of the Vicia faba nuclei were log-normally distributed with a mean of 1100 μm^3. The yield of chromatid-type aberrations was linear against absorbed dose and near-constant over 5 collection periods (2-12 h) after irradiation. Energy deposition events (recoil protons) determined by microdosimetry were related to cytological changes with the finding that 19% of incident recoil protons initiate visible changes in Vicia faba chromosomes. It is probable that a substantial fraction of recoil proton track length and deposited energy is in insensitive (non-DNA containing) portions of the nuclear volume.

  8. On the origin of heavy-tail statistics in equations of the Nonlinear Schrödinger type

    NASA Astrophysics Data System (ADS)

    Onorato, Miguel; Proment, Davide; El, Gennady; Randoux, Stephane; Suret, Pierre

    2016-09-01

    We study the formation of extreme events in incoherent systems described by the Nonlinear Schrödinger type of equations. We consider an exact identity that relates the evolution of the normalized fourth-order moment of the probability density function of the wave envelope to the rate of change of the width of the Fourier spectrum of the wave field. We show that, given an initial condition characterized by some distribution of the wave envelope, an increase of the spectral bandwidth in the focusing/defocusing regime leads to an increase/decrease of the probability of formation of rogue waves. Extensive numerical simulations in 1D+1 and 2D+1 are also performed to confirm the results.

  9. X-ray binary formation in low-metallicity blue compact dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Brorby, M.; Kaaret, P.; Prestwich, A.

    2014-07-01

    X-rays from binaries in small, metal-deficient galaxies may have contributed significantly to the heating and reionization of the early Universe. We investigate this claim by studying blue compact dwarfs (BCDs) as local analogues to these early galaxies. We constrain the relation of the X-ray luminosity function (XLF) to the star formation rate (SFR) using a Bayesian approach applied to a sample of 25 BCDs. The functional form of the XLF is fixed to that found for near-solar metallicity galaxies and is used to find the probability distribution of the normalization that relates X-ray luminosity to SFR. Our results suggest that the XLF normalization for low-metallicity BCDs (12+log(O/H) < 7.7) is not consistent with the XLF normalization for galaxies with near-solar metallicities, at a confidence level of 1 − 5 × 10^-6. The XLF normalization for the BCDs is found to be 14.5 ± 4.8 (M_☉^-1 yr), a factor of 9.7 ± 3.2 higher than for near-solar metallicity galaxies. Simultaneous determination of the XLF normalization and power-law index results in estimates of q = 21.2 (+12.2/-8.8) (M_☉^-1 yr) and α = 1.89 (+0.41/-0.30), respectively. Our results suggest a significant enhancement in the population of high-mass X-ray binaries in BCDs compared to the near-solar metallicity galaxies. This suggests that X-ray binaries could have been a significant source of heating in the early Universe.

  10. Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia

    NASA Astrophysics Data System (ADS)

    Hussin, F. N.; Rahman, H. A.; Bahar, A.

    2017-09-01

    The Black and Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical method and the discrete method. The historical method is a statistical method which uses the independence and normality of the logarithmic returns, yielding the simplest parameter estimates. Meanwhile, the discrete method uses the transition density function of the log-normal diffusion process, with estimates derived by the maximum likelihood method. These two methods are used to obtain parameter estimates from samples of Malaysian gold share price data, namely Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah. Modelling of the gold share price is important since fluctuations in gold prices affect the worldwide economy, including Malaysia's. It is found that the discrete method gives better parameter estimates than the historical method, as indicated by the smaller Root Mean Square Error (RMSE) value.
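    For the historical method mentioned above, the GBM parameters follow directly from the sample moments of the log returns. The Python sketch below applies this to a synthetic price path (not the FTSE Bursa Malaysia data); the time step and parameter values are illustrative assumptions.

```python
import numpy as np

def gbm_historical_estimates(prices, dt=1.0 / 252.0):
    """Historical-method GBM estimates from a price series.

    GBM log returns are i.i.d. N((mu - sigma^2/2) dt, sigma^2 dt), so the sample
    mean and standard deviation of the log returns recover mu and sigma.
    """
    r = np.diff(np.log(prices))
    sigma = r.std(ddof=1) / np.sqrt(dt)
    mu = r.mean() / dt + 0.5 * sigma**2
    return mu, sigma

# Synthetic daily price path standing in for an index series.
rng = np.random.default_rng(7)
dt, n, s0 = 1.0 / 252.0, 1000, 1.0
true_mu, true_sigma = 0.08, 0.25
steps = (true_mu - 0.5 * true_sigma**2) * dt + true_sigma * np.sqrt(dt) * rng.normal(size=n)
prices = s0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))
mu_hat, sigma_hat = gbm_historical_estimates(prices, dt)
print(f"mu ≈ {mu_hat:.3f}, sigma ≈ {sigma_hat:.3f}")  # drift is poorly determined on short samples
```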

  11. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    PubMed

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

    Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing pretest probability of a normal TTE may optimize management of requests. To establish and validate, based on requests for examinations, a simple algorithm for defining pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated to normality. Low pretest probability of normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age>70 years. In the prospective phase, the prevalences of normality were 72% and 25% in high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
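    The decision rule described above is simple enough to express directly in code. The sketch below is a paraphrase of the published rule with hypothetical argument names, not the authors' software.

```python
def pretest_probability_of_normal_tte(cardiac_history, history_uncertain, age):
    """Classify a TTE request as having 'high' or 'low' pretest probability of a normal exam.

    Sketch of the published rule: low probability if there is a known cardiac
    history or, when the history is in doubt, if age is over 70 years.
    """
    if cardiac_history or (history_uncertain and age > 70):
        return "low"
    return "high"

print(pretest_probability_of_normal_tte(False, False, 45))  # high
print(pretest_probability_of_normal_tte(False, True, 78))   # low
```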

  12. Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris

    Treesearch

    Michael S. Williams; Jeffrey H. Gove

    2003-01-01

    Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...

  13. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…

  14. Statistical Significance of Periodicity and Log-Periodicity with Heavy-Tailed Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    We estimate the probability that random noise, of several plausible standard distributions, creates a false alarm that a periodicity (or log-periodicity) is found in a time series. The solution of this problem is already known for independent Gaussian distributed noise. We investigate more general situations with non-Gaussian correlated noises and present synthetic tests on the detectability and statistical significance of periodic components. A periodic component of a time series is usually detected by some sort of Fourier analysis. Here, we use the Lomb periodogram analysis, which is suitable and outperforms Fourier transforms for unevenly sampled time series. We examine the false-alarm probability of the largest spectral peak of the Lomb periodogram in the presence of power-law distributed noises, of short-range and of long-range fractional-Gaussian noises. Increasing heavy-tailness (respectively correlations describing persistence) tends to decrease (respectively increase) the false-alarm probability of finding a large spurious Lomb peak. Increasing anti-persistence tends to decrease the false-alarm probability. We also study the interplay between heavy-tailness and long-range correlations. In order to fully determine if a Lomb peak signals a genuine rather than a spurious periodicity, one should in principle characterize the Lomb peak height, its width and its relations to other peaks in the complete spectrum. As a step towards this full characterization, we construct the joint-distribution of the frequency position (relative to other peaks) and of the height of the highest peak of the power spectrum. We also provide the distributions of the ratio of the highest Lomb peak to the second highest one. Using the insight obtained by the present statistical study, we re-examine previously reported claims of "log-periodicity" and find that the credibility for log-periodicity in 2D-freely decaying turbulence is weakened while it is strengthened for fracture, for the ion-signature prior to the Kobe earthquake and for financial markets.
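    The Monte Carlo logic behind such false-alarm estimates can be sketched as follows: simulate noise-only series on the same uneven time grid, record the highest Lomb peak of each realization, and compare threshold exceedance rates across noise models. The sketch below contrasts Gaussian with Student-t (heavy-tailed) i.i.d. noise only; it ignores the correlated-noise cases treated in the paper, and all parameters are illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(8)
n, n_trials = 200, 500
t = np.sort(rng.uniform(0.0, 100.0, size=n))          # unevenly sampled times
freqs = 2.0 * np.pi * np.linspace(0.01, 2.0, 1000)    # angular frequencies

def highest_peak(y):
    return lombscargle(t, y - y.mean(), freqs, normalize=True).max()

# Distribution of the largest Lomb peak under two noise-only models.
peaks_gauss = np.array([highest_peak(rng.normal(size=n)) for _ in range(n_trials)])
peaks_heavy = np.array([highest_peak(rng.standard_t(df=3, size=n)) for _ in range(n_trials)])

threshold = np.quantile(peaks_gauss, 0.99)  # 1% false-alarm level calibrated on Gaussian noise
print("Gaussian-calibrated threshold:", threshold)
print("exceedance rate under Student-t(3) noise:", np.mean(peaks_heavy > threshold))
```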

  15. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter two. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.

  16. Logging damage to residual trees following partial cutting in a green ash-sugarberry stand in the Mississippi Delta

    Treesearch

    James S. Meadows

    1993-01-01

    Partial cutting in bottomland hardwoods to control stand density and species composition sometimes results in logging damage to the lower bole and/or roots of residual trees. If severe, logging damage may lead to a decline in tree vigor, which may subsequently stimulate the production of epicormic branches, causing a decrease in bole quality and an eventual loss in...

  17. Logging Damage to Residual Trees Following Partial Cutting in a Green Ash-Sugarberry Stand in the Mississippi Delta

    Treesearch

    James S. Meadows

    1993-01-01

    Partial cutting in bottomland hardwoods to control stand density and species composition sometimes results in logging damage to the lower bole and/or roots of residual trees. If severe, logging damage may lead to a decline in tree vigor, which may subsequently stimulate the production of epicormic branches, causing a decrease in bole quality and an eventual loss in...

  18. A DEFINITION FOR GIANT PLANETS BASED ON THE MASS–DENSITY RELATIONSHIP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatzes, Artie P.; Rauer, Heike, E-mail: artie@tls-tautenburg.de, E-mail: Heike.Rauer@dlr.de

    We present the mass–density relationship (log M - log ρ) for objects with masses ranging from planets (M ≈ 0.01 M_Jup) to stars (M > 0.08 M_☉). This relationship shows three distinct regions separated by a change in slope in the log M - log ρ plane. In particular, objects with masses in the range 0.3 M_Jup to 60 M_Jup follow a tight linear relationship with no distinguishing feature to separate the low-mass end (giant planets) from the high-mass end (brown dwarfs). We propose a new definition of giant planets simply based on changes in the slope of the log M versus log ρ relationship. By this criterion, objects with masses less than ≈0.3 M_Jup are low-mass planets, either icy or rocky. Giant planets cover the mass range 0.3 M_Jup to 60 M_Jup. Analogous to the stellar main sequence, objects on the upper end of the giant planet sequence (brown dwarfs) can simply be referred to as "high-mass giant planets," while planets with masses near that of Jupiter can be called "low-mass giant planets."

  19. Comparison of fundus autofluorescence with photopic and scotopic fine matrix mapping in patients with retinitis pigmentosa: 4- to 8-year follow-up.

    PubMed

    Robson, Anthony G; Lenassi, Eva; Saihan, Zubin; Luong, Vy A; Fitzke, Fred W; Holder, Graham E; Webster, Andrew R

    2012-09-14

    To assess the significance and evolution of parafoveal rings of high-density fundus autofluorescence (AF) in 12 patients with retinitis pigmentosa (RP). Twelve patients with autosomal recessive RP or Usher syndrome type 2 were ascertained who had a parafoveal ring of high-density AF and a visual acuity of 20/30 or better at baseline. Photopic and scotopic fine matrix mapping (FMM) were performed to test sensitivity across the macula. AF imaging and FMM were repeated after 4 to 8 years and optical coherence tomography (OCT) performed. The size of the AF ring reduced over time and disappeared in one subject. Photopic thresholds were normal over the fovea; thresholds were elevated by 0.6 log units over the ring and by 1.2 log units external to the ring at baseline and differed by less than 0.1 log unit at follow-up. Mild photopic losses close to the internal edge of the ring were detected at baseline or follow-up in all. Mean scotopic thresholds over parafoveal areas within the ring were markedly elevated in 8 of 10 at baseline and were severely elevated in 9 of 11 at follow-up. The eccentricity of the inner edge of the AF ring corresponded closely with the lateral extent of the inner segment ellipsoid band in the OCT image. Ring constriction was largely coincident with progressive centripetal photopic threshold elevation led by worsening of rod photoreceptor function. The rate of constriction differed across patients, and a ring may reach a critical minimum before disappearing, at which stage central visual loss occurs. The structural and functional changes associated with rings of increased autofluorescence confirm that they provide an objective index of macular involvement and may aid the management of RP patients and the monitoring of future treatment efficacy.

  20. Interactive effects of historical logging and fire exclusion on ponderosa pine forest structure in the northern Rockies.

    PubMed

    Naficy, Cameron; Sala, Anna; Keeling, Eric G; Graham, Jon; DeLuca, Thomas H

    2010-10-01

    Increased forest density resulting from decades of fire exclusion is often perceived as the leading cause of historically aberrant, severe, contemporary wildfires and insect outbreaks documented in some fire-prone forests of the western United States. Based on this notion, current U.S. forest policy directs managers to reduce stand density and restore historical conditions in fire-excluded forests to help minimize high-severity disturbances. Historical logging, however, has also caused widespread change in forest vegetation conditions, but its long-term effects on vegetation structure and composition have never been adequately quantified. We document that fire-excluded ponderosa pine forests of the northern Rocky Mountains logged prior to 1960 have much higher average stand density, greater homogeneity of stand structure, more standing dead trees and increased abundance of fire-intolerant trees than paired fire-excluded, unlogged counterparts. Notably, the magnitude of the interactive effect of fire exclusion and historical logging substantially exceeds the effects of fire exclusion alone. These differences suggest that historically logged sites are more prone to severe wildfires and insect outbreaks than unlogged, fire-excluded forests and should be considered a high priority for fuels reduction treatments. Furthermore, we propose that ponderosa pine forests with these distinct management histories likely require distinct restoration approaches. We also highlight potential long-term risks of mechanical stand manipulation in unlogged forests and emphasize the need for a long-term view of fuels management.

  1. Of pacemakers and statistics: the actuarial method extended.

    PubMed

    Dussel, J; Wolbarst, A B; Scott-Millar, R N; Obel, I W

    1980-01-01

    Pacemakers cease functioning because of either natural battery exhaustion (nbe) or component failure (cf). A study of four series of pacemakers shows that a simple extension of the actuarial method, so as to incorporate Normal statistics, makes possible a quantitative differentiation between the two modes of failure. This involves the separation of the overall failure probability density function PDF(t) into constituent parts pdfnbe(t) and pdfcf(t). The approach should allow a meaningful comparison of the characteristics of different pacemaker types.
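
    The abstract does not give the exact formulation, so the following is only a sketch of the general idea: treat the overall failure-time density PDF(t) as a mixture of a component for natural battery exhaustion (pdfnbe, assumed Normal here) and a component for random component failure (pdfcf, assumed exponential here), and estimate the mixture parameters by maximum likelihood.

        # Minimal sketch (not the paper's exact method): fit a Normal + exponential
        # mixture to synthetic pacemaker failure times by maximum likelihood.
        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(0)
        # Synthetic failure times (months): 70% battery exhaustion ~ N(36, 4), 30% component failure
        t = np.concatenate([rng.normal(36.0, 4.0, 700), rng.exponential(24.0, 300)])
        t = t[t > 0]

        def neg_log_lik(params):
            w, mu, sigma, lam = params
            if not (0 < w < 1 and sigma > 0 and lam > 0):
                return np.inf
            pdf = w * stats.norm.pdf(t, mu, sigma) + (1 - w) * stats.expon.pdf(t, scale=1.0 / lam)
            return -np.sum(np.log(pdf + 1e-300))

        res = optimize.minimize(neg_log_lik, x0=[0.5, 30.0, 5.0, 0.05], method="Nelder-Mead")
        w, mu, sigma, lam = res.x
        print(f"weight(nbe)={w:.2f}  mu={mu:.1f} months  sigma={sigma:.1f}  cf rate={lam:.3f}/month")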

  2. Evaluation of drought using SPEI drought class transitions and log-linear models for different agro-ecological regions of India

    NASA Astrophysics Data System (ADS)

    Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.

    2017-08-01

    Markov chain and 3-dimensional log-linear models were applied to drought class transitions derived from the newly developed Standardized Precipitation Evapotranspiration Index (SPEI) at a 12-month time scale for six major drought-prone areas of India. A log-linear modelling approach was used to investigate differences in drought class transitions using SPEI-12 time series derived from 48 years of monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1-, 2-, and 3-month-ahead predictions of average transition time between drought classes, and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains, which could be used to explain the effect of the potential retreat of drought. Quasi-association and quasi-symmetry log-linear models have been fitted to the drought class transitions derived from the SPEI-12 time series. Estimates of odds, along with their confidence intervals, were obtained to describe the progression of drought and to estimate drought class transition probabilities. For the initial months, the calculated odds decrease as drought severity increases, and they decrease further for the succeeding months. This indicates that the expected frequency of transition from a drought class to the non-drought class, relative to transition to any drought class, decreases as the severity of the current class increases. The 3-dimensional log-linear model shows that during the last 24 years the drought probability has increased for almost all six regions. The findings of this study will help assess the impact of drought on gross primary production and support contingency planning in similar regions worldwide.
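
    A minimal sketch of the first step described above, assuming illustrative SPEI class boundaries (not necessarily the paper's): classify an SPEI-12 series into drought classes, count class-to-class transitions, and derive the transition probability matrix and mean residence times of a first-order Markov chain.

        # Sketch: drought-class transition probabilities and mean residence times
        # from an SPEI-12 series, treated as a first-order Markov chain.
        import numpy as np

        def classify_spei(x):
            # 0 = non-drought, 1 = moderate, 2 = severe, 3 = extreme (assumed cuts)
            if x > -1.0:
                return 0
            elif x > -1.5:
                return 1
            elif x > -2.0:
                return 2
            return 3

        rng = np.random.default_rng(1)
        spei = rng.normal(0.0, 1.0, 576)           # stand-in for 48 years of monthly SPEI-12
        states = np.array([classify_spei(x) for x in spei])

        n_states = 4
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1

        # Row-normalise the transition counts to get the probability matrix P[i, j]
        row_sums = counts.sum(axis=1, keepdims=True)
        P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

        # Mean residence time in class i for a Markov chain: 1 / (1 - P[i, i])
        with np.errstate(divide="ignore"):
            residence = 1.0 / (1.0 - np.diag(P))
        print(np.round(P, 3))
        print("mean residence times (months):", np.round(residence, 2))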

  3. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, and joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
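
    The two basic quantities such a program computes can be reproduced with modern libraries; a sketch with illustrative parameters (not the original program):

        # Sketch: probability of a rectangular region under a bivariate normal,
        # and a conditional normal probability. Parameter values are illustrative.
        import numpy as np
        from scipy.stats import multivariate_normal, norm

        mu = np.array([0.0, 0.0])
        sx, sy, rho = 1.0, 2.0, 0.6
        cov = np.array([[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]])
        mvn = multivariate_normal(mean=mu, cov=cov)

        # P(a1 < X < b1, a2 < Y < b2) by inclusion-exclusion on the joint CDF
        a1, b1, a2, b2 = -1.0, 1.0, -2.0, 2.0
        p_rect = (mvn.cdf([b1, b2]) - mvn.cdf([a1, b2])
                  - mvn.cdf([b1, a2]) + mvn.cdf([a1, a2]))

        # Conditional distribution: Y | X = x is N(mu_y + rho*sy/sx*(x - mu_x), sy^2*(1 - rho^2))
        x = 0.5
        cond_mean = mu[1] + rho * sy / sx * (x - mu[0])
        cond_sd = sy * np.sqrt(1.0 - rho**2)
        p_cond = norm.cdf(b2, cond_mean, cond_sd) - norm.cdf(a2, cond_mean, cond_sd)

        print(f"rectangle probability: {p_rect:.4f}")
        print(f"P({a2} < Y < {b2} | X = {x}): {p_cond:.4f}")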

  4. Nitrogen Oxides and Ozone from B-747 Measurements (NOXAR) During POLINAT-2 and SONEX: Overview and Case Studies on Continental and Marine Convection

    NASA Technical Reports Server (NTRS)

    Jeker, Dominique P.; Pfister, Lenny; Brunner, Dominik; Boccippio, Dennis J.; Pickering, Kenneth E.; Thompson, Anne M.; Wernli, Heini; Selkirk, Rennie B.; Kondo, Yutaka; Koike, Matoke

    1997-01-01

    In the framework of the project POLINAT 2 (Pollution in the North Atlantic Flight Corridor) we measured NO(x) (NO and NO2) and ozone on 85 flights through the North Atlantic Flight Corridor (NAFC) with a fully automated system permanently installed aboard an in-service Swissair B-747 airliner in the period of August to November 1997. The averaged NO(x) concentrations both in the NAFC and at the U.S. east coast were similar to those measured in autumn 1995 with the same system. The patchy occurrence of NO(x) enhancements of up to 3000 pptv over several hundred kilometers (plumes), found predominantly over the U.S. east coast, led to a log-normal NO(x) probability density function. In three case studies we examine the origins of such plumes by combining back-trajectories with brightness temperature enhanced (IR) satellite imagery, lightning observations from the U.S. National Lightning Detection Network (NLDN) and the Optical Transient Detector (OTD) satellite. We demonstrate that the location of NO(x) plumes can be well explained with maps of convective influence. We show that the number of lightning flashes in clusters of marine thunderstorms is proportional to the NO(x) concentrations observed several hundred kilometers downwind of the anvil outflows. From the fact that in autumn the NO(x) maximum was found several hundred kilometers off the U.S. east coast, it can be inferred that thunderstorms triggered over the warm Gulf Stream current are major sources for the regional upper tropospheric NO(x) budget in autumn.
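
    A sketch of the log-normal description of such concentration data, using synthetic values in place of the NOXAR measurements: fitting a Normal distribution to the log-concentrations is equivalent to a log-normal fit, and the fit quality can be checked on the logs.

        # Sketch: fit a log-normal to NO_x mixing ratios (pptv). Values are synthetic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        nox_pptv = rng.lognormal(mean=np.log(80.0), sigma=1.0, size=5000)  # stand-in data

        log_nox = np.log(nox_pptv)
        mu, sigma = log_nox.mean(), log_nox.std(ddof=1)
        print(f"geometric mean = {np.exp(mu):.1f} pptv, geometric std factor = {np.exp(sigma):.2f}")

        # Kolmogorov-Smirnov check of Normality on the logs (parameters estimated
        # from the data, so the p-value is only indicative)
        ks = stats.kstest(log_nox, "norm", args=(mu, sigma))
        print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")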

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Anthony M.; Williams, Liliya L.R.; Hjorth, Jens, E-mail: amyoung@astro.umn.edu, E-mail: llrw@astro.umn.edu, E-mail: jens@dark-cosmology.dk

    One usually thinks of a radial density profile as having a monotonically changing logarithmic slope, such as in NFW or Einasto profiles. However, in two different classes of commonly used systems, this is often not the case. These classes exhibit non-monotonic changes in their density profile slopes, which we call oscillations for short. We analyze these two unrelated classes separately. Class 1 consists of systems that have density oscillations and that are defined through their distribution function f ( E ), or differential energy distribution N ( E ), such as isothermal spheres, King profiles, or DARKexp, a theoretically derived model for relaxed collisionless systems. Systems defined through f ( E ) or N ( E ) generally have density slope oscillations. Class 1 system oscillations can be found at small, intermediate, or large radii, but we focus on a limited set of Class 1 systems that have oscillations in the central regions, usually at log( r / r {sub −2}) ≲ −2, where r {sub −2} is the largest radius where d log(ρ)/ d log( r ) = −2. We show that the shape of their N ( E ) can roughly predict the amplitude of oscillations. Class 2 systems, which are a product of dynamical evolution, consist of observed and simulated galaxies and clusters, and pure dark matter halos. Oscillations in the density profile slope seem pervasive in the central regions of Class 2 systems. We argue that in these systems, slope oscillations are an indication that a system is not fully relaxed. We show that these oscillations can be reproduced by small modifications to N ( E ) of DARKexp. These affect a small fraction of systems' mass and are confined to log( r / r {sub −2}) ≲ 0. The size of these modifications serves as a potential diagnostic for quantifying how far a system is from being relaxed.
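
    A sketch of the basic quantity discussed above, the logarithmic slope d log(ρ)/d log(r), computed numerically here for a monotonic NFW profile; the oscillating Class 1 and Class 2 profiles of the paper are not reproduced.

        # Sketch: logarithmic density slope and the radius r_-2 for an NFW profile.
        import numpy as np

        def nfw_density(r, rho0=1.0, rs=1.0):
            x = r / rs
            return rho0 / (x * (1.0 + x) ** 2)

        r = np.logspace(-3, 2, 500)             # radii in units of r_s
        rho = nfw_density(r)
        slope = np.gradient(np.log(rho), np.log(r))

        # r_-2 is the largest radius where the slope equals -2 (r_-2 = r_s for NFW)
        i = np.argmin(np.abs(slope + 2.0))
        print(f"slope at r = 0.01 r_s: {slope[np.argmin(np.abs(r - 0.01))]:.2f}")
        print(f"estimated r_-2 / r_s : {r[i]:.2f}")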

  6. Relationships between population density, fine-scale genetic structure, mating system and pollen dispersal in a timber tree from African rainforests

    PubMed Central

    Duminil, J; Daïnou, K; Kaviriri, D K; Gillet, P; Loo, J; Doucet, J-L; Hardy, O J

    2016-01-01

    Owing to the reduction of population density and/or the environmental changes it induces, selective logging could affect the demography, reproductive biology and evolutionary potential of forest trees. This is particularly relevant in tropical forests where natural population densities can be low and isolated trees may be subject to outcross pollen limitation and/or produce low-quality selfed seeds that exhibit inbreeding depression. Comparing reproductive biology processes and genetic diversity of populations at different densities can provide indirect evidence of the potential impacts of logging. Here, we analysed patterns of genetic diversity, mating system and gene flow in three Central African populations of the self-compatible legume timber species Erythrophleum suaveolens with contrasting densities (0.11, 0.68 and 1.72 adults per ha). The comparison of inbreeding levels among cohorts suggests that selfing is detrimental as inbred individuals are eliminated between seedling and adult stages. Levels of genetic diversity, selfing rates (∼16%) and patterns of spatial genetic structure (Sp ∼0.006) were similar in all three populations. However, the extent of gene dispersal differed markedly among populations: the average distance of pollen dispersal increased with decreasing density (from 200 m in the high-density population to 1000 m in the low-density one). Overall, our results suggest that the reproductive biology and genetic diversity of the species are not affected by current logging practices. However, further investigations need to be conducted in low-density populations to evaluate (1) whether pollen limitation may reduce seed production and (2) the regeneration potential of the species. PMID:26696137

  7. Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

    In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of any given element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase when the sample was obtained is expected to show a Normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of Normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
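
    The argument can be illustrated with a short simulation using illustrative pharmacokinetic constants: first-order elimination plus Normally distributed sampling times yields concentrations whose logarithms are Normally distributed.

        # Sketch of the abstract's argument: C(t) = C0 * exp(-k * t) with Normally
        # distributed sampling times t makes log(C) Normal, i.e. C is log-normal.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        C0, k = 100.0, 0.2                                 # illustrative constants
        t = rng.normal(loc=12.0, scale=3.0, size=20000)    # stochastic sampling times (h)
        t = t[t > 0]
        C = C0 * np.exp(-k * t)

        # log(C) = log(C0) - k*t is an affine transform of a Normal variable
        print("skewness of C      :", round(stats.skew(C), 2))
        print("skewness of log(C) :", round(stats.skew(np.log(C)), 3))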

  8. Calculation and evaluation of log-based physical properties in the inner accretionary prism, NanTroSEIZE Site C0002, Nankai Trough, Japan

    NASA Astrophysics Data System (ADS)

    Webb, S. I.; Tudge, J.; Tobin, H. J.

    2013-12-01

    Integrated Ocean Drilling Program (IODP) Expedition 338, the most recently completed drilling stage of the NanTroSEIZE project, targeted the Miocene inner accretionary prism off the coast of southwest Japan. NanTroSEIZE is a multi-stage project in which the main objective is to characterize, sample, and instrument the potentially seismogenic region of the Nankai Trough, an active subduction zone. Understanding the physical properties of the inner accretionary prism will aid in the characterization of the deformation that has taken place and the evolution of stress, fluid pressure, and strain over the deformational history of these sediments and rocks. This study focuses on the estimation of porosity and density from available logs to inform solid and fluid volume estimates at Site C0002 from the sea floor through the Kumano Basin into the accretionary prism. Gamma ray, resistivity, and sonic logs were acquired at Hole C0002F, to a total depth of 2005 mbsf into the inner accretionary prism. Because a density and neutron porosity tool could not be deployed, porosity and density must be estimated using a variety of largely empirical methods. In this study, we calculate estimated porosity and density from both the electrical resistivity and sonic (P-wave velocity) logs collected in Hole C0002F. However, the relationship of these physical properties to the available logs is not straightforward and can be affected by changes in fluid type, salinity, temperature, presence of fractures, and clay mineralogy. To evaluate and calibrate the relationships among these properties, we take advantage of the more extensive suite of LWD data recorded in Hole C0002A at the same drill site, including density and neutron porosity measurements. Data collected in both boreholes overlaps in the interval from 875 - 1400 mbsf in the lower Kumano Basin and across the basin-accretionary wedge boundary. Core-based physical properties are also available across this interval. Through comparison of density and porosity values in intervals where core and LWD data overlap, we calculate porosity and density values and evaluate their uncertainties, developing a best estimate given the specific lithology and pore fluid at this tectonic setting. We then propagate this calibrated estimate to the deeper portions of C0002F where core and LWD density and porosity measurements are unavailable, using the sonic and resistivity data alone.
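
    The abstract does not name the specific empirical relations used; purely as an illustration, two standard relations of the kind such a study might calibrate (Archie's resistivity-porosity relation and the Wyllie time-average equation) are sketched below with generic, assumed constants rather than the Expedition 338 calibration.

        # Sketch: generic log-based porosity estimates (illustrative constants only).
        import numpy as np

        def porosity_archie(Rt, Rw, a=1.0, m=2.0):
            """Archie's relation: Rt = a * Rw / phi**m  ->  phi = (a * Rw / Rt)**(1/m)."""
            return (a * Rw / np.asarray(Rt)) ** (1.0 / m)

        def porosity_wyllie(dt, dt_matrix=180.0, dt_fluid=620.0):
            """Wyllie time-average from sonic slowness dt (microseconds per metre)."""
            return (np.asarray(dt) - dt_matrix) / (dt_fluid - dt_matrix)

        Rt = np.array([1.5, 2.5, 4.0])        # formation resistivity, ohm-m (illustrative)
        dt = np.array([400.0, 330.0, 280.0])  # sonic slowness, us/m (illustrative)
        Rw = 0.25                             # assumed pore-water resistivity, ohm-m

        print("Archie porosity :", np.round(porosity_archie(Rt, Rw), 2))
        print("Wyllie porosity :", np.round(porosity_wyllie(dt), 2))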

  9. Predicting clicks of PubMed articles.

    PubMed

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.

  10. Predicting clicks of PubMed articles

    PubMed Central

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed. PMID:24551386
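
    The exact likelihood used by the authors is not reproduced here; as a loose illustration of a log-normal access-trend model, one can fit a scaled log-normal curve to monthly access counts by nonlinear least squares (the counts below are hypothetical).

        # Simplified sketch (not the authors' exact model): fit accesses(t) as a
        # scaled log-normal curve of the months since an article became available.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import lognorm

        def lognormal_trend(t, A, mu, sigma):
            # accesses(t) = A * log-normal pdf(t; mu, sigma)
            return A * lognorm.pdf(t, s=sigma, scale=np.exp(mu))

        # Hypothetical monthly access counts for one article over 24 months
        months = np.arange(1, 25)
        accesses = np.array([120, 310, 280, 230, 190, 160, 140, 120, 110, 95, 90, 80,
                             75, 70, 66, 60, 58, 55, 52, 50, 47, 45, 44, 42])

        popt, _ = curve_fit(lognormal_trend, months, accesses, p0=[2000.0, 1.0, 1.0],
                            bounds=([0.0, -5.0, 0.01], [np.inf, 5.0, 5.0]))
        A, mu, sigma = popt
        print(f"A={A:.0f}, mu={mu:.2f}, sigma={sigma:.2f}, "
              f"predicted accesses at month 30: {lognormal_trend(30.0, *popt):.0f}")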

  11. Fatigue Shifts and Scatters Heart Rate Variability in Elite Endurance Athletes

    PubMed Central

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    Purpose This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in ‘fatigue’ or in ‘no-fatigue’ state in ‘real life’ conditions. Methods 57 elite Nordic-skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was quoted with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms2 and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). Results 172 trials were identified as in a ‘fatigue’ and 891 as in ‘no-fatigue’ state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between ‘fatigue’ and ‘no-fatigue’: HRSU (+6.27±0.61 bpm), logTPSU (−0.36±0.04), logLFSU (−0.27±0.04), logHFSU (−0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (−9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (−0.28±0.03), logLFST (−0.29±0.03), logHFST (−0.32±0.04). Also, intra-individual variance of HRV parameters was larger (P<0.05) in the ‘fatigue’ state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). Conclusion HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied with larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from no-fatigue state, possibly reflecting different fatigue-induced alterations of HRV pattern. PMID:23951198

  12. HerMES: The contribution to the cosmic infrared background from galaxies selected by mass and redshift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viero, M. P.; Moncelsi, L.; Bock, J.

    2013-12-10

    We quantify the fraction of the cosmic infrared background (CIB) that originates from galaxies identified in the UV/optical/near-infrared by stacking 81,250 (∼35.7 arcmin{sup –2}) K-selected sources (K {sub AB} < 24.0) split according to their rest-frame U – V versus V – J colors into 72,216 star-forming and 9034 quiescent galaxies, on maps from Spitzer/MIPS (24 μm), Herschel/PACS (100, 160 μm), Herschel/SPIRE (250, 350, 500 μm), and AzTEC (1100 μm). The fraction of the CIB resolved by our catalog is (69% ± 15%) at 24 μm, (78% ± 17%) at 70 μm, (58% ± 13%) at 100 μm, (78% ± 18%) at 160 μm, (80% ± 17%) at 250 μm, (69% ± 14%) at 350 μm, (65% ± 12%) at 500 μm, and (45% ± 8%) at 1100 μm. Of that total, about 95% originates from star-forming galaxies, while the remaining 5% is from apparently quiescent galaxies. The CIB at λ ≲ 200 μm appears to be sourced predominantly from galaxies at z ≲ 1, while at λ ≳ 200 μm the bulk originates from 1 ≲ z ≲ 2. Galaxies with stellar masses log(M/M {sub ☉}) = 9.5-11 are responsible for the majority of the CIB, with those in the log(M/M {sub ☉}) = 9.5-10 bin contributing mostly at λ < 250 μm, and those in the log(M/M {sub ☉}) = 10-11 bin dominating at λ > 350 μm. The contributions from galaxies in the log(M/M {sub ☉}) = 9.0-9.5 (lowest) and log(M/M {sub ☉}) = 11.0-12.0 (highest) stellar-mass bins contribute the least—both of order 5%—although the highest stellar-mass bin is a significant contributor to the luminosity density at z ≳ 2. The luminosities of the galaxies responsible for the CIB shift from combinations of 'normal' and luminous infrared galaxies (LIRGs) at λ ≲ 160 μm, to LIRGs at 160 ≲ λ ≲ 500 μm, to finally LIRGs and ultra-luminous infrared galaxies at λ ≳ 500 μm. Stacking analyses were performed using SIMSTACK, a novel algorithm designed to account for possible biases in the stacked flux density due to clustering. It is made available to the public at www.astro.caltech.edu/~viero/viero_homepage/toolbox.html.

  13. HerMES: The Contribution to the Cosmic Infrared Background from Galaxies Selected by Mass and Redshift

    NASA Astrophysics Data System (ADS)

    Viero, M. P.; Moncelsi, L.; Quadri, R. F.; Arumugam, V.; Assef, R. J.; Béthermin, M.; Bock, J.; Bridge, C.; Casey, C. M.; Conley, A.; Cooray, A.; Farrah, D.; Glenn, J.; Heinis, S.; Ibar, E.; Ikarashi, S.; Ivison, R. J.; Kohno, K.; Marsden, G.; Oliver, S. J.; Roseboom, I. G.; Schulz, B.; Scott, D.; Serra, P.; Vaccari, M.; Vieira, J. D.; Wang, L.; Wardlow, J.; Wilson, G. W.; Yun, M. S.; Zemcov, M.

    2013-12-01

    We quantify the fraction of the cosmic infrared background (CIB) that originates from galaxies identified in the UV/optical/near-infrared by stacking 81,250 (~35.7 arcmin-2) K-selected sources (K AB < 24.0) split according to their rest-frame U - V versus V - J colors into 72,216 star-forming and 9034 quiescent galaxies, on maps from Spitzer/MIPS (24 μm), Herschel/PACS (100, 160 μm), Herschel/SPIRE (250, 350, 500 μm), and AzTEC (1100 μm). The fraction of the CIB resolved by our catalog is (69% ± 15%) at 24 μm, (78% ± 17%) at 70 μm, (58% ± 13%) at 100 μm, (78% ± 18%) at 160 μm, (80% ± 17%) at 250 μm, (69% ± 14%) at 350 μm, (65% ± 12%) at 500 μm, and (45% ± 8%) at 1100 μm. Of that total, about 95% originates from star-forming galaxies, while the remaining 5% is from apparently quiescent galaxies. The CIB at λ ≲ 200 μm appears to be sourced predominantly from galaxies at z ≲ 1, while at λ ≳ 200 μm the bulk originates from 1 ≲ z ≲ 2. Galaxies with stellar masses log(M/M ⊙) = 9.5-11 are responsible for the majority of the CIB, with those in the log(M/M ⊙) = 9.5-10 bin contributing mostly at λ < 250 μm, and those in the log(M/M ⊙) = 10-11 bin dominating at λ > 350 μm. The contributions from galaxies in the log(M/M ⊙) = 9.0-9.5 (lowest) and log(M/M ⊙) = 11.0-12.0 (highest) stellar-mass bins contribute the least—both of order 5%—although the highest stellar-mass bin is a significant contributor to the luminosity density at z ≳ 2. The luminosities of the galaxies responsible for the CIB shift from combinations of "normal" and luminous infrared galaxies (LIRGs) at λ ≲ 160 μm, to LIRGs at 160 ≲ λ ≲ 500 μm, to finally LIRGs and ultra-luminous infrared galaxies at λ ≳ 500 μm. Stacking analyses were performed using SIMSTACK, a novel algorithm designed to account for possible biases in the stacked flux density due to clustering. It is made available to the public at www.astro.caltech.edu/~viero/viero_homepage/toolbox.html. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  14. Tobacco outlet density and converted versus native non-daily cigarette use in a national US sample

    PubMed Central

    Kirchner, Thomas R; Anesetti-Rothermel, Andrew; Bennett, Morgane; Gao, Hong; Carlos, Heather; Scheuermann, Taneisha S; Reitzel, Lorraine R; Ahluwalia, Jasjit S

    2017-01-01

    Objective: Investigate whether non-daily smokers’ (NDS) cigarette price and purchase preferences, recent cessation attempts, and current intentions to quit are associated with the density of the retail cigarette product landscape surrounding their residential address. Participants: Cross-sectional assessment of N=904 converted NDS (CNDS), who previously smoked every day, and N=297 native NDS (NNDS), who only smoked non-daily, drawn from a national panel. Outcome measures: Kernel density estimation was used to generate a nationwide probability surface of tobacco outlets linked to participants’ residential ZIP code. Hierarchically nested log-linear models were compared to evaluate associations between outlet density, non-daily use patterns, price sensitivity and quit intentions. Results: Overall, NDS in ZIP codes with greater outlet density were less likely than NDS in ZIP codes with lower outlet density to hold 6-month quit intentions when they also reported that price affected use patterns (G2=66.1, p<0.001) and purchase locations (G2=85.2, p<0.001). CNDS were more likely than NNDS to reside in ZIP codes with higher outlet density (G2=322.0, p<0.001). Compared with CNDS in ZIP codes with lower outlet density, CNDS in high-density ZIP codes were more likely to report that price influenced the amount they smoke (G2=43.9, p<0.001), and were more likely to look for better prices (G2=59.3, p<0.001). NDS residing in high-density ZIP codes were not more likely to report that price affected their cigarette brand choice compared with those in ZIP codes with lower density. Conclusions: This paper provides initial evidence that the point-of-sale cigarette environment may be differentially associated with the maintenance of CNDS versus NNDS patterns. Future research should investigate how tobacco control efforts can be optimised to both promote cessation and curb the rising tide of non-daily smoking in the USA.
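
    A minimal sketch of the kernel density estimation step, with synthetic outlet coordinates standing in for the national retailer data:

        # Sketch: build a kernel density surface from outlet coordinates and evaluate
        # it at residential locations. Coordinates are synthetic placeholders.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(4)
        # Synthetic outlet locations (e.g. projected km east/north of an origin), shape (2, n)
        outlets = np.vstack([rng.normal(0, 5, 500), rng.normal(0, 5, 500)])

        kde = gaussian_kde(outlets)

        # Density of the outlet surface at three residential locations, shape (2, m)
        homes = np.array([[0.0, 2.0, 10.0],
                          [0.0, 1.0, 10.0]])
        print(np.round(kde(homes), 4))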

  15. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
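
    The correspondence stated above between an exponential force-magnitude distribution and a modified-Bessel-function Cartesian density can be checked numerically in two dimensions; a sketch follows, with the mean force magnitude set to 1 for illustration.

        # Numerical check: in 2-D, |f| ~ Exp(f0) with isotropic directions gives a
        # Cartesian component density p(fx) = K0(|fx| / f0) / (pi * f0).
        import numpy as np
        from scipy.special import k0

        rng = np.random.default_rng(5)
        f0 = 1.0                                    # mean force magnitude (illustrative)
        f = rng.exponential(f0, 2_000_000)          # |f| ~ Exp(f0)
        theta = rng.uniform(0.0, 2.0 * np.pi, f.size)
        fx = f * np.cos(theta)                      # Cartesian component

        hist, edges = np.histogram(fx, bins=200, range=(-6, 6), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        analytic = k0(np.abs(centers) / f0) / (np.pi * f0)

        # Compare away from fx = 0, where K0 has a (integrable) logarithmic divergence
        mask = np.abs(centers) > 0.5
        print("max |empirical - analytic| for |fx| > 0.5 :",
              float(np.max(np.abs(hist[mask] - analytic[mask]))))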

  16. Factors affecting the cost of tractor logging in the California Pine Region

    Treesearch

    M.E. Krueger

    1929-01-01

    The past five years have seen a very rapid expansion in the use of tractors for logging in the pine region of California. In 1923, when a previous bulletin of this series was published, steam donkey yarding, with which that study treated, was the prevailing method of yarding. During the season of 1928 probably not less than 60 percent of the timber output of this...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnenfeld, Alessandro; Treu, Tommaso; Marshall, Philip J.

    Here, we investigate the cosmic evolution of the internal structure of massive early-type galaxies over half of the age of the universe. We also perform a joint lensing and stellar dynamics analysis of a sample of 81 strong lenses from the Strong Lensing Legacy Survey and Sloan ACS Lens Survey and combine the results with a hierarchical Bayesian inference method to measure the distribution of dark matter mass and stellar initial mass function (IMF) across the population of massive early-type galaxies. Lensing selection effects are taken into account. Furthermore, we find that the dark matter mass projected within the inner 5 kpc increases for increasing redshift, decreases for increasing stellar mass density, but is roughly constant along the evolutionary tracks of early-type galaxies. The average dark matter slope is consistent with that of a Navarro-Frenk-White profile, but is not well constrained. The stellar IMF normalization is close to a Salpeter IMF at log M * = 11.5 and scales strongly with increasing stellar mass. No dependence of the IMF on redshift or stellar mass density is detected. The anti-correlation between dark matter mass and stellar mass density supports the idea of mergers being more frequent in more massive dark matter halos.

  18. PHAGE FORMATION IN STAPHYLOCOCCUS MUSCAE CULTURES

    PubMed Central

    Price, Winston H.

    1949-01-01

    1. The total nucleic acid synthesized by normal and by infected S. muscae suspensions is approximately the same. This is true for either lag phase cells or log phase cells. 2. The amount of nucleic acid synthesized per cell in normal cultures increases during the lag period and remains fairly constant during log growth. 3. The amount of nucleic acid synthesized per cell by infected cells increases during the whole course of the infection. 4. Infected cells synthesize less RNA and more DNA than normal cells. The ratio of RNA/DNA is larger in lag phase cells than in log phase cells. 5. Normal cells release neither ribonucleic acid nor desoxyribonucleic acid into the medium. 6. Infected cells release both ribonucleic acid and desoxyribonucleic acid into the medium. The time and extent of release depend upon the physiological state of the cells. 7. Infected lag phase cells may or may not show an increased RNA content. They release RNA, but not DNA, into the medium well before observable cellular lysis and before any virus is liberated. At virus liberation, the cell RNA content falls to a value below that initially present, while DNA, which increased during infection falls to approximately the original value. 8. Infected log cells show a continuous loss of cell RNA and a loss of DNA a short time after infection. At the time of virus liberation the cell RNA value is well below that initially present and the cells begin to lyse. PMID:18139006

  19. Characterizing the Inner Accretionary Prism of the Nankai Trough with 3D Seismic and Logging While Drilling at IODP Site C0002

    NASA Astrophysics Data System (ADS)

    Boston, B.; Moore, G. F.; Jurado, M. J.; Sone, H.; Tobin, H. J.; Saffer, D. M.; Hirose, T.; Toczko, S.; Maeda, L.

    2014-12-01

    The deeper, inner parts of active accretionary prisms have been poorly studied due to the lack of drilling data, low seismic image quality, and typically thick overlying sediments. Our project focuses on the interior of the Nankai Trough inner accretionary prism using deep scientific drilling and a 3D seismic cube. International Ocean Discovery Program (IODP) Expedition 348 extended the existing riser hole to more than 3000 meters below seafloor (mbsf) at Site C0002. Logging while drilling (LWD) data included gamma ray, resistivity, resistivity image, and sonic logs. LWD analysis of the lower section revealed intense deformation on the borehole images, characterized by steep bedding, faults, and fractures. Bedding plane orientations were measured throughout, with minor gaps at heavily deformed zones disrupting the quality of the resistivity images. Bedding trends are predominantly steeply dipping (60-90°) to the NW. Interpretation of fractures and faults in the image log revealed distinct sets of fractures and faults and variable fracture density, which is remarkably high at fault zones. Gamma ray, resistivity, and sonic logs indicated a generally homogeneous lithology along this section, consistent with the predominantly silty-claystone lithologies described from cuttings samples. Drops in sonic velocity were observed at the fault zones defined on borehole images. Seismic reflection interpretation of the deep faults in the inner prism is exceedingly difficult due to a strong seafloor multiple, high-angle bedding dips, and low frequency of the data. Structural reconstructions were employed to test whether folding of seismic horizons in the overlying forearc basin could be from an interpreted paleothrust within the inner prism. We used a trishear-based restoration to estimate fault slip on folded horizons landward of C0002. We estimate ~500 m of slip from a steeply dipping deep thrust within the last ~0.9 Ma. Folding is not found in the Kumano sediments near C0002, where normal faults and tilting dominate the modern basin deformation. Both the logging and seismic data are consistent in characterizing a heavily deformed inner prism. Most of this deformation must have occurred during or before formation of the overlying modern Kumano forearc basin sediments.

  20. Coal test drilling for the DE-NA-Zin Bisti Area, San Juan County, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, R.W.; Jentgen, R.W.

    1980-01-01

    From October 1978 to June 1979, the US Geological Survey (USGS) drilled 51 test holes, and cored 9 holes, in the vicinity of the Bisti Trading Post in the southwestern part of the San Juan Basin, San Juan County, New Mexico. The drilling was done in response to expressions of interest received by the Bureau of Land Management concerning coal leasing and, in some places, badlands preservation. The object of the drilling was to determine the depth, thickness, extent, and quality of the coal in the Upper Cretaceous Fruitland Formation in northwest New Mexico. The holes were geophysically logged immediately after drilling. Resistivity, spontaneous-potential, and natural gamma logs were run in all of the holes. A high-resolution density log was also run in all holes drilled before January 13, when a logging unit from the USGS in Albuquerque was available. After January 13, the holes were logged by a USGS unit from Casper, Wyoming, that lacked density logging capabilities. At nine locations a second hole was drilled, about 20 ft from the first hole, down to selected coal-bearing intervals, and the coal beds were cored. A detailed description of each of the cores is given on the page(s) following the logs for each hole. From these coal cores, 32 intervals were selected and submitted to the Department of Energy in Pittsburgh, Pennsylvania, for analysis.
