Sample records for averaged probability density

  1. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
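
    As a toy illustration of the mark-recapture arithmetic that underlies protocols like this one, the sketch below applies the classical Lincoln-Petersen estimator, which rests on the same equal-mixing assumption the abstract tests; all numbers are hypothetical, and this is not the authors' P(e)-based procedure.

      # Hypothetical numbers; a minimal Lincoln-Petersen sketch, not the
      # authors' equilibrium-capture-probability method.
      def lincoln_petersen(n_marked: int, n_captured: int, n_recaptured: int) -> float:
          """Estimate N = (marked released * total captured) / marked recaptured."""
          return n_marked * n_captured / n_recaptured

      n_hat = lincoln_petersen(n_marked=1000, n_captured=800, n_recaptured=40)
      print(f"estimated population size: {n_hat:.0f}")  # 20000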

  2. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  3. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
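
    A minimal numerical sketch of the mass-density-weighted (Favre) ensemble averaging that the APDF is designed to reproduce; the lognormal density field and the correlated temperature samples below are invented purely for illustration.

      import numpy as np

      # Toy ensemble (hypothetical data): the Favre (density-weighted) mean
      # next to the plain Reynolds (unweighted) ensemble mean.
      rng = np.random.default_rng(0)
      rho = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)          # density samples
      temp = 300.0 + 50.0 * rng.normal(size=rho.size) + 20.0 * (rho - rho.mean())

      favre_mean = np.average(temp, weights=rho)    # <rho*T> / <rho>
      reynolds_mean = temp.mean()                   # <T>
      print(favre_mean, reynolds_mean)              # differ by the rho-T correlation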

  4. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and is briefly described. Detailed comparisons between the results and available experimental data are carried out. Invoking the Eulerian scalar PDF method is observed to both improve the simulation quality and reduce the computing cost.

  5. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
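
    The lognormality claim can be checked on any sample of line-of-sight average densities with a few lines of SciPy; the synthetic densities below are placeholders for the dispersion-measure data used in the paper.

      import numpy as np
      from scipy import stats

      # Synthetic stand-in for line-of-sight average densities (cm^-3).
      rng = np.random.default_rng(1)
      n_avg = rng.lognormal(mean=-1.0, sigma=0.7, size=500)

      shape, loc, scale = stats.lognorm.fit(n_avg, floc=0.0)
      print(f"fitted lognormal: mu = {np.log(scale):.2f}, sigma = {shape:.2f}")
      print(stats.kstest(n_avg, "lognorm", args=(shape, loc, scale)))  # goodness of fit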

  6. The risks and returns of stock investment in a financial market

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-03-01

    The risks and returns of stock investment are discussed by numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risks of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite holds for weak EDS; (ii) increasing the initial position reduces the risks of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those in other literature, and good agreement is found.

  7. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.

  8. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion of optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
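
    The binomial arithmetic behind the 29-flaw point-estimate rule can be reproduced directly: if the true POD is 0.90, the chance of detecting all 29 flaws is 0.9^29 ≈ 0.047, which is why a clean 29-of-29 run supports the 90/95 claim. A short sketch (the flaw count and POD value are those quoted in the abstract):

      from scipy import stats

      n_flaws, pod = 29, 0.90
      # Chance of detecting all 29 flaws when the true POD is 90%:
      print(pod ** n_flaws)                          # ~0.047, hence ~95% confidence
      # Equivalent statement with the binomial survival function,
      # P(at least n_flaws - k successes) for k allowed misses:
      k = 0
      print(stats.binom.sf(n_flaws - k - 1, n_flaws, pod))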

  9. A cross-diffusion system derived from a Fokker-Planck equation with partial averaging

    NASA Astrophysics Data System (ADS)

    Jüngel, Ansgar; Zamponi, Nicola

    2017-02-01

    A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.

  10. The statistics of peaks of Gaussian random fields [cosmological density fluctuations]

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
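
    For the one-dimensional illustration the abstract mentions, the mean density of maxima of a Gaussian process is given by Rice's classic formula, rate = sqrt(m4/m2)/(2π), in terms of spectral moments. The sketch below checks this by direct simulation with an assumed Gaussian spectrum (not the cosmological spectra of the paper).

      import numpy as np

      # Sample a 1-D Gaussian random field with an assumed spectrum, count its
      # local maxima, and compare with Rice's formula rate = sqrt(m4/m2)/(2*pi).
      rng = np.random.default_rng(2)
      n, dx = 2**20, 0.02
      w = 2.0 * np.pi * np.fft.rfftfreq(n, dx)        # angular frequencies
      spec = np.exp(-w**2)                            # assumed spectral density
      noise = rng.normal(size=w.size) + 1j * rng.normal(size=w.size)
      field = np.fft.irfft(np.sqrt(spec) * noise)     # field with spectrum ~ spec

      peaks = (field[1:-1] > field[:-2]) & (field[1:-1] > field[2:])
      rate_sim = peaks.sum() / (n * dx)

      dw = w[1] - w[0]
      m2 = np.sum(w**2 * spec) * dw                   # spectral moments
      m4 = np.sum(w**4 * spec) * dw
      rate_rice = np.sqrt(m4 / m2) / (2.0 * np.pi)
      print(rate_sim, rate_rice)                      # agree to a few per cent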

  11. Density PDFs of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2012-09-01

    The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  12. Multipartite entanglement characterization of a quantum phase transition

    NASA Astrophysics Data System (ADS)

    Costantini, G.; Facchi, P.; Florio, G.; Pascazio, S.

    2007-07-01

    A probability density characterization of multipartite entanglement is tested on the one-dimensional quantum Ising model in a transverse field. The average and second moment of the probability distribution are numerically shown to be good indicators of the quantum phase transition. We comment on multipartite entanglement generation at a quantum phase transition.

  13. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    NASA Technical Reports Server (NTRS)

    Isaacson, Jeffrey A.; Canizares, Claude R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.

  14. Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface

    NASA Astrophysics Data System (ADS)

    Liu, Tianhui; Fu, Bina; Zhang, Dong H.

    2013-11-01

    The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity, and the helicopter orientation yields a higher dissociation probability than the cartwheel orientation. A new and interesting site-averaging effect is found for the title molecule-surface system: one can essentially reproduce the six-dimensional dissociation probability by averaging the four-dimensional dissociation probabilities over 25 fixed sites.
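
    The site-averaging result amounts to replacing one 6-D calculation by a mean of 4-D fixed-site calculations on a uniform grid of surface sites; the sketch below illustrates only that averaging step, with a made-up stand-in for the fixed-site probability.

      import numpy as np

      # Made-up stand-in for a 4-D fixed-site dissociation probability;
      # only the 25-site averaging step mirrors the paper's finding.
      def p4d_fixed_site(x, y, energy_ev):
          corrugation = 0.5 * (1.0 + 0.3 * np.cos(2*np.pi*x) * np.cos(2*np.pi*y))
          return corrugation / (1.0 + np.exp(-(energy_ev - 0.6) / 0.05))

      sites = [(x, y) for x in np.linspace(0, 1, 5, endpoint=False)
                      for y in np.linspace(0, 1, 5, endpoint=False)]   # 5x5 = 25 sites
      p6d_estimate = np.mean([p4d_fixed_site(x, y, 0.7) for x, y in sites])
      print(p6d_estimate)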

  15. Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Tianhui; Fu, Bina; Zhang, Dong H., E-mail: zhangdh@dicp.ac.cn

    The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity, and the helicopter orientation yields a higher dissociation probability than the cartwheel orientation. A new and interesting site-averaging effect is found for the title molecule-surface system: one can essentially reproduce the six-dimensional dissociation probability by averaging the four-dimensional dissociation probabilities over 25 fixed sites.

  16. Surface Impact Simulations of Helium Nanodroplets

    DTIC Science & Technology

    2015-06-30

    mechanical delocalization of the individual helium atoms in the droplet and the quantum statistical effects that accompany the interchange of identical... incorporates the effects of atomic delocalization by treating individual atoms as smeared-out probability distributions that move along classical... probability density distributions to give effective interatomic potential energy curves that have zero-point averaging effects built into them [25

  17. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.

  18. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, choosing the constraints and making the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
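
    A small sketch of the probability-assignment step: with discrete outcomes, prescribed "energies" and one average constraint, the maximum entropy method yields p_i ∝ exp(-λE_i), and the multiplier λ follows from a one-dimensional root find. All values below are illustrative.

      import numpy as np
      from scipy.optimize import brentq

      # MaxEnt assignment under a single mean constraint <E> = target_mean.
      E = np.array([0.0, 1.0, 2.0, 3.0])              # illustrative outcome energies
      target_mean = 1.2

      def mean_at(lam):
          weights = np.exp(-lam * E)
          return (E * weights).sum() / weights.sum()

      lam = brentq(lambda l: mean_at(l) - target_mean, -50.0, 50.0)
      p = np.exp(-lam * E)
      p /= p.sum()
      print(lam, p, (p * E).sum())                    # recovers the constrained average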

  19. Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities

    NASA Astrophysics Data System (ADS)

    Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii I.

    A great deal of attention has been paid to the study of the probability characteristics of electromagnetic waves scattered by one-dimensional fluctuations of the medium dielectric permittivity. However, the problem of determining the probability density and average intensity of the field inside a stochastically inhomogeneous medium with arbitrary extension of the fluctuations has not been considered yet. It is the purpose of the present report to find and to analyze the indicated functions for a plane electromagnetic wave scattered by a layer with one-dimensional fluctuations of permittivity. We assumed that the length and the amplitude of individual fluctuations, as well as the interval between them, are random quantities. All of the indicated fluctuation parameters are supposed to be independent random values possessing Gaussian distributions. We considered the stationary time cases of both small-scale and large-scale rarefied inhomogeneities. Mathematically, such a problem can be reduced to the solution of a Fredholm integral equation of the second kind for the Hertz potential (U). Using the decomposition of the field into a series of multiply scattered waves, we obtained the expression for the probability density of the field of the plane wave and determined the moments of the scattered field. We have shown that all odd moments of the centered field (U - ⟨U⟩) are equal to zero and the even moments depend on the intensity. It was found that the probability density of the field possesses a Gaussian distribution. The average field is small compared with the standard deviation of the scattered field for all considered cases of inhomogeneities. The average intensity of the field is of the order of the standard deviation of the field-intensity fluctuations, and it drops as the inhomogeneity length increases in the case of small-scale inhomogeneities. The behavior of the average intensity is more complicated in the case of large-scale medium inhomogeneities. The average intensity is an oscillating function of the average fluctuation length if the standard deviation of the fluctuations of the inhomogeneity length is greater than the wavelength. When the standard deviation of the fluctuations of the medium inhomogeneity extension is smaller than the wavelength, the average intensity depends only weakly on the average fluctuation extension. The obtained results may be used for the analysis of electromagnetic wave propagation in media with fluctuating parameters caused by such factors as leaves of trees, cumulus clouds, and internal gravity waves with a chaotic phase. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).

  20. Synoptic observations of Jupiter's radio emissions: Average Statistical properties observed by Voyager

    NASA Technical Reports Server (NTRS)

    Alexander, J. K.; Carr, T. D.; Thieman, J. R.; Schauble, J. J.; Riddle, A. C.

    1980-01-01

    Observations of Jupiter's low frequency radio emissions collected over one-month intervals before and after each Voyager encounter were analyzed. Compilations of occurrence probability, average power flux density and average sense of circular polarization are presented as functions of central meridian longitude, phase of Io, and frequency. The results are compared with ground-based observations. The necessary geometrical conditions and preferred polarization sense for Io-related decametric emission observed by Voyager from above both the dayside and nightside hemispheres are found to be essentially the same as those observed in Earth-based studies. On the other hand, there is a clear local time dependence in the Io-independent decametric emission. Io appears to have an influence on the average flux density of the emission down to below 2 MHz. The average power flux density spectrum of Jupiter's emission has a broad peak near 9 MHz. Integration of the average spectrum over all frequencies gives a total radiated power for an isotropic source of 4 × 10^11 W.

  21. Rightfulness of Summation Cut-Offs in the Albedo Problem with Gaussian Fluctuations of the Density of Scatterers

    NASA Astrophysics Data System (ADS)

    Selim, M. M.; Bezák, V.

    2003-06-01

    The one-dimensional version of the radiative transfer problem (i.e. the so-called rod model) is analysed with a Gaussian random extinction function ε(x). Then the optical length X = ∫_0^L ε(x) dx is a Gaussian random variable. The transmission and reflection coefficients, T(X) and R(X), are taken as infinite series. When these series (and also the series representing T^2(X), R^2(X), R(X)T(X), etc.) are averaged, term by term, according to the Gaussian statistics, the series become divergent after averaging. As was shown in a former paper by the authors (in Acta Physica Slovaca (2003)), a rectification can be managed when a 'modified' Gaussian probability density function is used, equal to zero for X < 0 and proportional to the standard Gaussian probability density for X > 0. In the present paper, the authors put forward an alternative, showing that if the r.m.s. deviation of X is sufficiently small in comparison with the mean of X, the standard Gaussian averaging is well functional provided that the summation in the series representing the variable T^(m-j)(X)R^j(X) (m = 1, 2, ...; j = 1, ..., m) is truncated at a well-chosen finite term. The authors exemplify their analysis by some numerical calculations.

  22. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.

  23. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
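
    The contrast between uniform and hot-spot deposition can be seen with elementary Poisson statistics: for a mean hit number λ, P(≥1 hit) = 1 − exp(−λ) is nearly linear at small λ, while a local deposition enhancement factor multiplies λ and pushes cells into the multiple-hit regime. The rates below are invented purely for illustration.

      import numpy as np

      # Poisson hit statistics with invented rates: uniform deposition stays
      # in the linear, single-hit regime; a 25x hot spot does not.
      doses_gy = np.array([0.010, 0.030, 0.100])       # 10-100 mGy
      hits_per_gy = 3.0                                # hypothetical mean hits/Gy

      for enhancement in (1.0, 25.0):                  # uniform vs. hot spot
          lam = hits_per_gy * doses_gy * enhancement
          p_one = 1.0 - np.exp(-lam)                   # at least one hit
          p_multi = 1.0 - np.exp(-lam) * (1.0 + lam)   # at least two hits
          print(enhancement, p_one.round(4), p_multi.round(4))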

  24. Average fidelity between random quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyczkowski, Karol

    2005-03-01

    We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.
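
    A brute-force check of such mean fidelities is straightforward: draw density matrices from the Hilbert-Schmidt measure via the Ginibre construction ρ = GG†/Tr(GG†) and average the Uhlmann fidelity F = (Tr √(√ρ σ √ρ))². A sketch (sample sizes are arbitrary):

      import numpy as np
      from scipy.linalg import sqrtm

      rng = np.random.default_rng(3)

      def random_rho(n):
          """Hilbert-Schmidt random state via the Ginibre construction."""
          g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          m = g @ g.conj().T
          return m / np.trace(m)

      def fidelity(rho, sigma):
          """Uhlmann fidelity F = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
          s = sqrtm(rho)
          return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

      n, samples = 4, 200
      mean_f = np.mean([fidelity(random_rho(n), random_rho(n)) for _ in range(samples)])
      print(f"mean fidelity (Hilbert-Schmidt measure, N={n}): {mean_f:.3f}")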

  25. A virtual pebble game to ensemble average graph rigidity.

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
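
    The MCC bound mentioned above is just global counting; a minimal version for a body-bar network (6 DOF per rigid body, at most one DOF removed per bar, minus the 6 trivial rigid-body motions) might look like the following, with made-up network sizes:

      # Global Maxwell constraint counting for a body-bar network.
      def maxwell_count(n_bodies, n_bars):
          return 6 * n_bodies - n_bars - 6     # lower bound on internal DOF

      for bodies, bars in [(10, 30), (10, 54), (10, 70)]:
          f = maxwell_count(bodies, bars)
          label = ("under-constrained" if f > 0
                   else "isostatic" if f == 0 else "over-constrained")
          print(bodies, bars, f, label)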

  26. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is proposed. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is used, at first, to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and an angular-dependent flux externally incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  27. Metapopulation extinction risk: dispersal's duplicity.

    PubMed

    Higgins, Kevin

    2009-09-01

    Metapopulation extinction risk is the probability that all local populations are simultaneously extinct during a fixed time frame. Dispersal may reduce a metapopulation's extinction risk by raising its average per-capita growth rate. By contrast, dispersal may raise a metapopulation's extinction risk by reducing its average population density. Which effect prevails is controlled by habitat fragmentation. Dispersal in mildly fragmented habitat reduces a metapopulation's extinction risk by raising its average per-capita growth rate without causing any appreciable drop in its average population density. By contrast, dispersal in severely fragmented habitat raises a metapopulation's extinction risk because the rise in its average per-capita growth rate is more than offset by the decline in its average population density. The metapopulation model used here shows several other interesting phenomena. Dispersal in sufficiently fragmented habitat reduces a metapopulation's extinction risk to that of a constant environment. Dispersal between habitat fragments reduces a metapopulation's extinction risk insofar as local environments are asynchronous. Grouped dispersal raises the effective habitat fragmentation level. Dispersal search barriers raise metapopulation extinction risk. Nonuniform dispersal may reduce the effective fraction of suitable habitat fragments below the extinction threshold. Nonuniform dispersal may make demographic stochasticity a more potent metapopulation extinction force than environmental stochasticity.

  28. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  29. Chaotic attractors and physical measures for some density dependent Leslie population models

    NASA Astrophysics Data System (ADS)

    Ugarcovici, Ilie; Weiss, Howard

    2007-12-01

    Following ecologists' discoveries, mathematicians have begun studying extensions of the ubiquitous age structured Leslie population model that allow some survival probabilities and/or fertility rates to depend on population densities. These nonlinear extensions commonly exhibit very complicated dynamics: through computer studies, some authors have discovered robust Hénon-like strange attractors in several families. Population biologists and demographers frequently wish to average a function over many generations and conclude that the average is independent of the initial population distribution. This type of 'ergodicity' seems to be a fundamental tenet in population biology. In this paper we develop the first rigorous ergodic theoretic framework for density dependent Leslie population models. We study two generation models with Ricker and Hassell (recruitment type) fertility terms. We prove that for some parameter regions these models admit a chaotic (ergodic) attractor which supports a unique physical probability measure. This physical measure, having full Lebesgue measure basin, satisfies in the strongest possible sense the population biologist's requirement for ergodicity in their population models. We use the celebrated work of Wang and Young (2001, Commun. Math. Phys. 218, 1-97), and our results are the first applications of their method to biology, ecology or demography.

  30. Broadcasting but not receiving: density dependence considerations for SETI signals

    NASA Astrophysics Data System (ADS)

    Smith, Reginald D.

    2009-04-01

    This paper develops a detailed quantitative model which uses the Drake equation and an assumption of an average maximum radio broadcasting distance for a communicative civilization. On this basis, it estimates the minimum civilization density for contact between two civilizations to be probable in a given volume of space under certain conditions, the amount of time it would take for a first contact, and whether reciprocal contact is possible.
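
    The core of such a model is Poisson geometry: with broadcasters of spatial density n and maximum broadcast range R, the chance that at least one other broadcaster lies within range is 1 − exp(−n·(4/3)πR³), which can be inverted for a minimum density. A sketch with an assumed range (not the paper's parameter values):

      import numpy as np

      # Invert P(contact) = 1 - exp(-n * 4/3 * pi * R^3) for the minimum
      # broadcaster density n at a 50% contact probability.
      R = 1000.0                                   # assumed broadcast range, ly
      volume = 4.0 / 3.0 * np.pi * R**3
      n_min = -np.log(1.0 - 0.5) / volume
      print(f"minimum broadcaster density for 50% contact: {n_min:.3e} per ly^3")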

  31. A Semi-Analytical Method for the PDFs of a Ship Rolling in Random Oblique Waves

    NASA Astrophysics Data System (ADS)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.

  32. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M. B.; Lafferty, K. D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.

  33. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  34. Probability of lek collapse is lower inside sage-grouse Core Areas: Effectiveness of conservation policy for a landscape species.

    PubMed

    Spence, Emma Suzuki; Beck, Jeffrey L; Gregory, Andrew J

    2017-01-01

    Greater sage-grouse (Centrocercus urophasianus) occupy sagebrush (Artemisia spp.) habitats in 11 western states and 2 Canadian provinces. In September 2015, the U.S. Fish and Wildlife Service announced that the listing status for sage-grouse had changed from warranted but precluded to not warranted. The primary reason cited for this change of status was that the enactment of new regulatory mechanisms was sufficient to protect sage-grouse populations. One such plan is the 2008 Wyoming Sage Grouse Executive Order (SGEO), enacted by Governor Freudenthal. The SGEO identifies "Core Areas" that are to be protected by keeping them relatively free from further energy development and limiting other forms of anthropogenic disturbance near active sage-grouse leks. Using the Wyoming Game and Fish Department's sage-grouse lek count database and the Wyoming Oil and Gas Conservation Commission database of oil and gas well locations, we investigated the effectiveness of Wyoming's Core Areas, specifically: 1) how well Core Areas encompass the distribution of sage-grouse in Wyoming; 2) whether Core Area leks have a reduced probability of lek collapse; and 3) what, if any, edge effects the intensification of oil and gas development adjacent to Core Areas may have on Core Area populations. Core Areas contained 77% of male sage-grouse attending leks and 64% of active leks. Using Bayesian binomial probability analysis, we found an average 10.9% probability of lek collapse in Core Areas and an average 20.4% probability of lek collapse outside Core Areas. Using linear regression, we found that development density outside Core Areas was related to the probability of lek collapse inside Core Areas. Specifically, the probability of collapse among leks >4.83 km inside Core Area boundaries was significantly related to well density within 1.61 km (1 mi) and 4.83 km (3 mi) outside of Core Area boundaries. Collectively, these data suggest that the Wyoming Core Area Strategy has benefited sage-grouse and sage-grouse habitat conservation; however, additional guidelines limiting development densities adjacent to Core Areas may be necessary to effectively protect Core Area populations.
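
    The Bayesian binomial analysis referred to above has a compact conjugate form: with a Beta(1, 1) prior, k collapsed leks out of n give a Beta(1 + k, 1 + n − k) posterior for the collapse probability. The counts below are hypothetical, chosen only to echo the reported inside/outside contrast.

      from scipy import stats

      # Beta-binomial updating with a uniform Beta(1, 1) prior; lek counts
      # are hypothetical, not the study's data.
      for label, collapsed, total in [("inside Core Areas", 22, 200),
                                      ("outside Core Areas", 41, 200)]:
          posterior = stats.beta(1 + collapsed, 1 + total - collapsed)
          lo, hi = posterior.interval(0.95)
          print(f"{label}: mean = {posterior.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")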

  35. Spatial averaging of a dissipative particle dynamics model for active suspensions

    NASA Astrophysics Data System (ADS)

    Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot

    2018-03-01

    Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.

  36. Current Fluctuations in Stochastic Lattice Gases

    NASA Astrophysics Data System (ADS)

    Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.

    2005-01-01

    We study current fluctuations in lattice gases in the macroscopic limit extending the dynamic approach for density fluctuations developed in previous articles. More precisely, we establish a large deviation theory for the space-time fluctuations of the empirical current which include the previous results. We then estimate the probability of a fluctuation of the average current over a large time interval. It turns out that recent results by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)] in certain cases underestimate this probability due to the occurrence of dynamical phase transitions.

  37. Modeling of turbulent supersonic H2-air combustion with a multivariate beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1993-01-01

    Recent calculations of turbulent supersonic reacting shear flows using an assumed multivariate beta PDF (probability density function) resulted in reduced production rates and a delay in the onset of combustion. This result is not consistent with available measurements. The present research explores two possible reasons for this behavior: use of PDF's that do not yield Favre averaged quantities, and the gradient diffusion assumption. A new multivariate beta PDF involving species densities is introduced which makes it possible to compute Favre averaged mass fractions. However, using this PDF did not improve comparisons with experiment. A countergradient diffusion model is then introduced. Preliminary calculations suggest this to be the cause of the discrepancy.

  38. An adaptive density-based routing protocol for flying Ad Hoc networks

    NASA Astrophysics Data System (ADS)

    Zheng, Xueli; Qi, Qian; Wang, Qingwen; Li, Yongqiang

    2017-10-01

    An Adaptive Density-based Routing Protocol (ADRP) for Flying Ad Hoc Networks (FANETs) is proposed in this paper. The main objective is to calculate the forwarding probability adaptively in order to increase the efficiency of forwarding in FANETs. ADRP dynamically fine-tunes the rebroadcasting probability of a node for routing request packets according to the number of neighbour nodes. Indeed, it is advantageous to favour retransmission by nodes with few neighbour nodes. We describe the protocol, implement it and evaluate its performance using the NS-2 network simulator. Simulation results reveal that ADRP achieves better performance in terms of packet delivery fraction, average end-to-end delay, normalized routing load, normalized MAC load and throughput, compared with AODV.

  39. Colloids exposed to random potential energy landscapes: From particle number density to particle-potential and particle-particle interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bewerunge, Jörg; Capellmann, Ronja F.; Platten, Florian

    2016-07-28

    Colloidal particles were exposed to a random potential energy landscape that has been created optically via a speckle pattern. The mean particle density as well as the potential roughness, i.e., the disorder strength, were varied. The local probability density of the particles as well as its main characteristics were determined. For the first time, the disorder-averaged pair density correlation function g^(1)(r) and an analogue of the Edwards-Anderson order parameter g^(2)(r), which quantifies the correlation of the mean local density among disorder realisations, were measured experimentally and shown to be consistent with replica liquid state theory results.

  40. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF channel undergoes Nakagami-m fading, and the exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture-averaging effect is discussed as well.

  41. Adaptive detection of a noise signal according to the Neyman-Pearson criterion

    NASA Astrophysics Data System (ADS)

    Padiryakov, Y. A.

    1985-03-01

    Optimum detection according to the Neyman-Pearson criterion is considered for a random Gaussian noise signal, stationary during measurement, in a stationary random Gaussian background interference. Detection is based on two samples whose statistics are characterized by estimates of their spectral densities, it being known a priori that sample A from the signal channel is either the sum of signal and interference or interference alone, and that sample B from the reference interference channel is an interference with the same spectral density as that of the interference in sample A under both hypotheses. The probability of correct detection is maximized on average, first in the 2N-dimensional space of signal spectral density and interference spectral density readings, by fixing the probability of false alarm at each point so as to stabilize it at a constant level against variation of the interference spectral density. Deterministic decision rules are established. The algorithm is then reduced to equivalent detection in the N-dimensional space of the ratios of sample A readings to sample B readings.

  42. Inhomogeneous diffusion and ergodicity breaking induced by global memory effects

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2016-11-01

    We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urnlike memory mechanism. The characteristic function is calculated in an exact way, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that strongly differ from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are analytically studied through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is explicitly shown through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the memory. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.

  43. The Feynman-Vernon Influence Functional Approach in QED

    NASA Astrophysics Data System (ADS)

    Biryukov, Alexander; Shleenkov, Mark

    2016-10-01

    In the path-integral approach we describe the evolution of interacting electromagnetic and fermionic fields by the use of the density-matrix formalism. The equation for the density matrix and the transition probabilities for the fermionic field are obtained as averages over the electromagnetic-field influence functional. We obtain a formula for calculating the electromagnetic-field influence functional for various initial and final states of the field. We derive the influence functional for the case in which the initial and final states of the electromagnetic field are the vacuum. We present the Lagrangian for a relativistic fermionic field under the influence of the electromagnetic vacuum.

  44. Probability density of aperture-averaged irradiance fluctuations for long range free space optical communication links.

    PubMed

    Lyke, Stephen D; Voelz, David G; Roggemann, Michael C

    2009-11-20

    The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagating through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius rho0 and lognormal for aperture sizes on the order of rho0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
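
    The two candidate models are easy to evaluate side by side; for unit-mean irradiance the gamma-gamma PDF is f(I) = 2(αβ)^((α+β)/2)/(Γ(α)Γ(β)) · I^((α+β)/2−1) · K_(α−β)(2√(αβI)). The parameter values below are assumptions for illustration, not fitted values from the paper.

      import numpy as np
      from scipy.special import gamma, kv

      def gamma_gamma_pdf(i, a, b):
          """Unit-mean gamma-gamma irradiance PDF with parameters a, b."""
          return (2.0 * (a * b) ** ((a + b) / 2) / (gamma(a) * gamma(b))
                  * i ** ((a + b) / 2 - 1) * kv(a - b, 2.0 * np.sqrt(a * b * i)))

      def lognormal_pdf(i, sigma2):
          """Unit-mean lognormal irradiance PDF with log-variance sigma2."""
          mu = -sigma2 / 2.0
          return (np.exp(-(np.log(i) - mu) ** 2 / (2.0 * sigma2))
                  / (i * np.sqrt(2.0 * np.pi * sigma2)))

      i = np.linspace(0.01, 6.0, 600)
      di = i[1] - i[0]
      gg = gamma_gamma_pdf(i, a=4.0, b=2.0)            # assumed parameters
      ln = lognormal_pdf(i, sigma2=0.25)
      print((gg * di).sum(), (ln * di).sum())          # both close to 1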

  5. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic, and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
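
    The bimodal U shape of the occupation-time density is easy to reproduce with a toy renewal simulation. The sketch below assumes a symmetric two-state process with Pareto sojourn times of index alpha < 1 (infinite mean), a stand-in for the non-ergodic CTRW phase; the fraction of time spent in one state then piles up near 0 and 1, as in the arcsine-type laws discussed here.

        import numpy as np

        rng = np.random.default_rng(2)

        def occupation_fraction(alpha, t_max, rng):
            # Two-state renewal process; sojourn times are Pareto with tail
            # index alpha < 1, so the mean sojourn time diverges.
            t, t_plus, state = 0.0, 0.0, int(rng.integers(2))
            while t < t_max:
                tau = min(rng.pareto(alpha) + 1.0, t_max - t)
                if state == 1:
                    t_plus += tau
                t += tau
                state = 1 - state
            return t_plus / t_max

        fracs = np.array([occupation_fraction(0.5, 1e4, rng) for _ in range(2000)])
        hist, _ = np.histogram(fracs, bins=20, density=True)
        print(hist[0], hist[len(hist) // 2], hist[-1])  # U shape: high at the edges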

  6. Modelling detectability of kiore (Rattus exulans) on Aguiguan, Mariana Islands, to inform possible eradication and monitoring efforts

    USGS Publications Warehouse

    Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.

    2011-01-01

    Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall, and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.

  7. Launch pad lightning protection effectiveness

    NASA Technical Reports Server (NTRS)

    Stahmann, James R.

    1991-01-01

    Using the striking-distance theory, in which a lightning leader strikes the nearest grounded point within reach on its last jump to earth (a jump whose length is the striking distance), the probability of striking a point on a structure in the presence of other points can be estimated. The lightning strokes are divided into deciles, each characterized by an average peak current and striking distance. The striking distances are used as radii from the points to generate windows of approach through which the leader must pass to reach a designated point. The projections of the windows on a horizontal plane, as they are rotated through all possible angles of approach, define an area that can be multiplied by the decile stroke density to arrive at the probability of strokes having the window's average striking distance. The sum over all deciles gives the cumulative probability for all strokes. The technique can be applied to NASA-Kennedy launch pad structures to estimate the lightning protection effectiveness for the crane, gaseous oxygen vent arm, and other points. Streamers from sharp points on the structure provide protection for surfaces having large radii of curvature. The effects of nearby structures can also be estimated.
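
    The decile summation described above amounts to a short calculation. In the sketch below, every number is hypothetical: each decile is assigned a window capture area A_d (from the rotated-window construction) and one tenth of the total ground stroke density, and the per-decile products are summed to give the cumulative strike probability.

        # Hypothetical window capture areas A_d (m^2), one per stroke-current decile,
        # from the rotated approach-window construction described in the abstract.
        areas = [150.0, 130.0, 115.0, 100.0, 88.0, 78.0, 70.0, 62.0, 55.0, 48.0]
        ground_density = 4e-6                     # total stroke density, strokes per m^2 per year
        decile_density = ground_density / 10.0    # each decile carries 10% of strokes
        p_strike = sum(a * decile_density for a in areas)
        print(p_strike)                           # expected strikes per year to the designated point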

  8. Study on queueing behavior in pedestrian evacuation by extended cellular automata model

    NASA Astrophysics Data System (ADS)

    Hu, Jun; You, Lei; Zhang, Hong; Wei, Juan; Guo, Yangyong

    2018-01-01

    This paper proposes a pedestrian evacuation model, based on extended cellular automata, for effective simulation of evacuation efficiency. In the model, a pedestrian's momentary transition probability to a target position is defined in terms of the floor field and the queueing time, and the critical time is defined as the waiting-time threshold in a queue. Queueing time and critical time are derived using fractional Brownian motion through analysis of pedestrian arrival characteristics. Simulations on this platform, together with actual evacuations, were conducted to study the relationships among system evacuation time, average system velocity, pedestrian density, flow rate, and critical time. The results demonstrate that at low pedestrian density, evacuation efficiency can be improved through adoption of the shortest-route strategy, and critical time has an inverse relationship with average system velocity. Conversely, at higher pedestrian densities, it is better to adopt the shortest-queueing-time strategy, and critical time is inversely related to flow rate.

  9. Monte Carlo based protocol for cell survival and tumour control probability in BNCT.

    PubMed

    Ye, S J

    1999-02-01

    A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed with the cell-killing yield, the ^10B(n,α)^7Li reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the ^10B(n,α)^7Li reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure producing an average cell survival probability of 10^(-3)-10^(-5) for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low epithermal and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with the unmodified neutron spectrum. A dominant effect of cell-killing yield on tumour cell survival demonstrates the importance of choice of boron carrier drug. However, these calculations do not indicate an unambiguous preference for one drug, due to the large overlap of tumour cell survival in the probable ranges of the cell-killing yield for the two drugs. The cell survival value averaged over a bulky tumour volume is used to predict the overall BNCT therapeutic efficacy, using a simple model of tumour control probability (TCP).
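
    The Poisson survival model at the core of this protocol is compact enough to state directly. A minimal sketch, assuming survival means zero lethal events and that the mean number of lethal events per cell is the product of the cell-killing yield and the ^10B(n,α)^7Li reaction density; the numbers below are hypothetical, chosen only to span the quoted survival range.

        import numpy as np

        def survival_probability(kill_yield, reaction_density):
            # Poisson statistics: survival = probability of zero lethal events,
            # with mean lethal events per cell = kill_yield * reaction_density.
            return np.exp(-kill_yield * reaction_density)

        # Mean lethal-event counts of roughly 7 to 11.5 per cell reproduce the
        # 10^(-3) to 10^(-5) survival range quoted for shallow tumours.
        for mean_events in (6.9, 9.2, 11.5):
            print(mean_events, survival_probability(1.0, mean_events))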

  10. Forecasting seeing and parameters of long-exposure images by means of ARIMA

    NASA Astrophysics Data System (ADS)

    Kornilov, Matwey V.

    2016-02-01

    Atmospheric turbulence is one of the major limiting factors for ground-based astronomical observations. In this paper, the problem of short-term forecasting of seeing is discussed. Real data obtained from atmospheric optical turbulence (OT) measurements above Mount Shatdzhatmaz in 2007-2013 have been analysed. Linear auto-regressive integrated moving average (ARIMA) models are used for the forecasting. A new procedure is proposed for forecasting the image characteristics of direct astronomical observations (central image intensity, full width at half maximum, radius encircling 80% of the energy). The probability density functions of the forecasts of these quantities are 1.5-2 times narrower than the respective unconditional probability density functions. Overall, this study found that the described technique can adequately describe temporal stochastic variations of the OT power.
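
    As a sketch of this kind of forecasting, the snippet below fits an ARIMA model with statsmodels to a synthetic stand-in series and extracts the conditional forecast mean and standard error; a conditional spread noticeably smaller than the series' unconditional standard deviation is the effect the abstract quantifies. The series and the (p, d, q) order are illustrative, not the paper's.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(3)
        # Hypothetical stand-in for a measured seeing series (arcsec).
        seeing = pd.Series(1.0 + 0.1 * np.cumsum(rng.normal(0, 0.02, 500))
                           + rng.normal(0, 0.05, 500))

        res = ARIMA(seeing, order=(2, 0, 1)).fit()
        fc = res.get_forecast(steps=10)
        print(fc.predicted_mean.iloc[0], fc.se_mean.iloc[0])  # conditional forecast PDF
        print(seeing.std())          # unconditional spread, for comparison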

  11. Using areas of known occupancy to identify sources of variation in detection probability of raptors: taking time lowers replication effort for surveys.

    PubMed

    Murn, Campbell; Holloway, Graham J

    2016-10-01

    Species occurring at low density can be difficult to detect, and if imperfect detection is not properly accounted for it will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability, and how they can be managed, is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. The averaged detection probability was 0.207 (s.e. 0.033); based on this, the mean number of visits required to determine with 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that the low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
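
    The "13 visits" figure follows from the detection probability by a one-line calculation: if each visit independently detects with probability p, the chance of missing on all n visits is (1 - p)^n, and absence can be asserted at 95% confidence once that falls below 0.05. A quick check (the CI endpoints for p below are back-calculated from the quoted standard error and are approximate):

        import math

        def visits_needed(p, miss_tolerance=0.05):
            # Smallest n with (1 - p)**n <= miss_tolerance.
            return math.ceil(math.log(miss_tolerance) / math.log(1.0 - p))

        print(visits_needed(0.207))          # 13, as quoted in the abstract
        for p in (0.142, 0.272):             # approximate 95% CI bounds for p
            print(p, visits_needed(p))       # roughly brackets the quoted 9-20 range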

  12. Probability density function evolution of power systems subject to stochastic variation of renewable energy

    NASA Astrophysics Data System (ADS)

    Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.

    2018-05-01

    As renewable energies are increasingly integrated into power systems, there is increasing interest in stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and consequently to analyse its impacts on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) of the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted in which the generalized FPK equation is satisfied in the average sense of integration with an assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single-machine infinite-bus power system. The numerical analysis gives the same result as Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.

  13. Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Szalay, Alex

    2016-03-01

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  14. STATISTICAL DECOUPLING OF A LAGRANGIAN FLUID PARCEL IN NEWTONIAN COSMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Szalay, Alex, E-mail: xwang@cita.utoronto.ca

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  15. Quantum and classical dynamics of water dissociation on Ni(111): A test of the site-averaging model in dissociative chemisorption of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bin; Department of Chemical Physics, University of Science and Technology of China, Hefei 230026; Guo, Hua, E-mail: hguo@unm.edu

    Recently, we reported the first highly accurate nine-dimensional global potential energy surface (PES) for water interacting with a rigid Ni(111) surface, built on a large number of density functional theory points [B. Jiang and H. Guo, Phys. Rev. Lett. 114, 166101 (2015)]. Here, we investigate site-specific reaction probabilities on this PES using a quasi-seven-dimensional quantum dynamical model. It is shown that the site-specific reactivity is largely controlled by the topography of the PES instead of the barrier height alone, underscoring the importance of multidimensional dynamics. In addition, the full-dimensional dissociation probability is estimated by averaging fixed-site reaction probabilities with appropriate weights. To validate this model and gain insights into the dynamics, additional quasi-classical trajectory calculations in both full and reduced dimensions have also been performed and important dynamical factors such as the steering effect are discussed.

  16. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Although satellite products serve the same purpose, each has a different value because of its inherent uncertainty. These products have also been generated over long periods and are numerous and diverse, so efforts to reduce the uncertainty and to handle very large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) product using MODIS Aqua, MODIS Terra and COMS (Communication, Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density functions (PDFs) of the members using their posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques based on the Hadoop framework will be adopted for more efficient computation on very large satellite datasets.
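
    A minimal sketch of the Gaussian BMA combination described here, with the EM weight estimation reduced to its simplest form (a single predictive variance shared across members); the array names are hypothetical and this is not the authors' code.

        import numpy as np

        def bma_em(forecasts, obs, n_iter=200):
            # forecasts: (n, K) member predictions; obs: (n,) observations.
            # Returns member weights and a shared predictive variance.
            n, K = forecasts.shape
            w = np.full(K, 1.0 / K)
            var = np.var(obs - forecasts.mean(axis=1))
            for _ in range(n_iter):
                # E-step: posterior responsibility of member k for each case.
                dens = np.exp(-(obs[:, None] - forecasts) ** 2 / (2 * var))
                dens /= np.sqrt(2 * np.pi * var)
                z = w * dens
                z /= z.sum(axis=1, keepdims=True)
                # M-step: new weights and shared error variance.
                w = z.mean(axis=0)
                var = np.sum(z * (obs[:, None] - forecasts) ** 2) / n
            return w, var

        # The BMA predictive PDF is then the w-weighted mixture of member PDFs,
        # and the ensemble SST is the w-weighted average of the member fields.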

  17. Light enpolarization by disordered media under partial polarized illumination: the role of cross-scattering coefficients.

    PubMed

    Zerrad, M; Soriano, G; Ghabbach, A; Amra, C

    2013-02-11

    We show how disordered media can increase the local degree of polarization (DOP) of an arbitrary, partially polarized incident beam. The role of cross-scattering coefficients is emphasized, together with the probability density functions (PDFs) of the scattering DOP. The average DOP of the scattered light is calculated as a function of the DOP of the incident illumination.

  18. Constructing the AdS dual of a Fermi liquid: AdS black holes with Dirac hair

    NASA Astrophysics Data System (ADS)

    Čubrović, Mihailo; Zaanen, Jan; Schalm, Koenraad

    2011-10-01

    We provide evidence that the holographic dual to a strongly coupled charged Fermi liquid has a non-zero fermion density in the bulk. We show that the pole strength of the stable quasiparticle characterizing the Fermi surface is encoded in the probability density of a single normalizable fermion wavefunction in AdS. Recalling Migdal's theorem, which relates the pole strength to the Fermi-Dirac characteristic discontinuity in the number density at ω_F, we conclude that the AdS dual of a Fermi liquid is described by occupied on-shell fermionic modes in AdS. Encoding the occupied levels directly in the total spatially averaged probability density of the fermion field, we show that an AdS Reissner-Nordström black hole in a theory with charged fermions has a critical temperature at which the system undergoes a first-order transition to a black hole with a non-vanishing profile for the bulk fermion field. Thermodynamics and spectral analysis support that the solution with non-zero AdS fermion profile is the preferred ground state at low temperatures.

  19. The study of PDF turbulence models in combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The accurate prediction of turbulent combustion is still beyond the reach of today's computational techniques. It is the consensus of the combustion community that predictions of chemically reacting flow are poor when conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, so that the use of averaged temperature, pressure, and density produces excessively large errors. The probability density function (PDF) method is at present the only alternative that uses local instantaneous values of the temperature, density, etc., in predicting the chemical reaction rate, and it is thus the only viable approach for turbulent combustion calculations.

  20. Density Fluctuation in Aqueous Solutions and Molecular Origin of Salting-Out Effect for CO2

    DOE PAGES

    Ho, Tuan Anh; Ilgen, Anastasia

    2017-10-26

    Using molecular dynamics simulation, we studied the density fluctuations and cavity formation probabilities in aqueous solutions and their effect on the hydration of CO2. With increasing salt concentration, we report an increased probability of observing a larger than average number of species in the probe volume. Our energetic analyses indicate that the van der Waals and electrostatic interactions between CO2 and aqueous solutions become more favorable with increasing salt concentration, favoring the solubility of CO2 (salting in). However, because fewer cavities form as salt concentration increases, the solubility of CO2 decreases. The formation of cavities was found to be the primary control on the dissolution of gas, and is responsible for the observed CO2 salting-out effect. Finally, our results provide a fundamental understanding of density fluctuations in aqueous solutions and the molecular origin of the salting-out effect for a real gas.

  1. Density Fluctuation in Aqueous Solutions and Molecular Origin of Salting-Out Effect for CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Tuan Anh; Ilgen, Anastasia

    Using molecular dynamics simulation, we studied the density fluctuations and cavity formation probabilities in aqueous solutions and their effect on the hydration of CO2. With increasing salt concentration, we report an increased probability of observing a larger than average number of species in the probe volume. Our energetic analyses indicate that the van der Waals and electrostatic interactions between CO2 and aqueous solutions become more favorable with increasing salt concentration, favoring the solubility of CO2 (salting in). However, because fewer cavities form as salt concentration increases, the solubility of CO2 decreases. The formation of cavities was found to be the primary control on the dissolution of gas, and is responsible for the observed CO2 salting-out effect. Finally, our results provide a fundamental understanding of density fluctuations in aqueous solutions and the molecular origin of the salting-out effect for a real gas.

  2. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.

  3. Probabilistic distribution and stochastic P-bifurcation of a hybrid energy harvester under colored noise

    NASA Astrophysics Data System (ADS)

    Mokem Fokou, I. S.; Nono Dueyou Buckjohn, C.; Siewe Siewe, M.; Tchawoua, C.

    2018-03-01

    In this manuscript, a hybrid energy harvesting system combining piezoelectric and electromagnetic transduction and subjected to colored noise is investigated. Using the stochastic averaging method, the stationary probability density functions of the amplitudes are obtained and reveal interesting dynamics related to the long-term behavior of the device. From the stationary probability densities, we discuss stochastic bifurcation through qualitative changes, showing that the noise intensity, correlation time and other system parameters can be treated as bifurcation parameters. Numerical simulations are performed for comparison with the analytical findings. The mean first passage time (MFPT) is computed numerically in order to investigate the stability of the system. By computing the mean residence time (TMR), we explore the stochastic resonance phenomenon and show how it is related to the correlation time of the colored noise and to high output power.

  4. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of the class-conditional probability density functions (likelihoods) and the prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
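
    A minimal sketch of the Bayes decision rule with a Parzen-window PNN density estimate, in the spirit of the classifiers named above; the smoothing width sigma and all data shapes are illustrative assumptions, not the paper's configuration.

        import numpy as np

        def pnn_classify(x, train_x, train_y, sigma=0.5):
            # Class-conditional likelihoods are Parzen (Gaussian-kernel) averages
            # over each class's training points; the decision maximizes
            # prior * likelihood, i.e. the a posteriori probability.
            classes = np.unique(train_y)
            posteriors = []
            for c in classes:
                pts = train_x[train_y == c]
                d2 = np.sum((pts - x) ** 2, axis=1)
                likelihood = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
                prior = np.mean(train_y == c)
                posteriors.append(prior * likelihood)
            return classes[int(np.argmax(posteriors))]

        # Usage: pnn_classify(feature_vector, training_features, training_labels),
        # where the feature vectors would be per-pixel polarimetric measurements.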

  5. Thermal nanostructure: An order parameter multiscale ensemble approach

    NASA Astrophysics Data System (ADS)

    Cheluvaraja, S.; Ortoleva, P.

    2010-02-01

    Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.

  6. Statistics of velocity gradients in two-dimensional Navier-Stokes and ocean turbulence.

    PubMed

    Schorghofer, Norbert; Gille, Sarah T

    2002-02-01

    Probability density functions and conditional averages of velocity gradients derived from upper ocean observations are compared with results from forced simulations of the two-dimensional Navier-Stokes equations. Ocean data are derived from TOPEX satellite altimeter measurements. The simulations use rapid forcing on large scales, characteristic of surface winds. The probability distributions of transverse velocity derivatives from the ocean observations agree with the forced simulations, although they differ from unforced simulations reported elsewhere. The distribution and cross correlation of velocity derivatives provide clear evidence that large coherent eddies play only a minor role in generating the observed statistics.

  7. Correlation of cervical endplate strength with CT measured subchondral bone density

    PubMed Central

    Ordway, Nathaniel R.; Lu, Yen-Mou; Zhang, Xingkai; Cheng, Chin-Chang; Fang, Huang

    2007-01-01

    Cervical interbody device subsidence can result in screw breakage, plate dislodgement, and/or kyphosis. Preoperative bone density measurement may be helpful in predicting the complications associated with anterior cervical surgery. This is especially important when a motion-preserving device is implanted, given the detrimental effect of subsidence on postoperative segmental motion following disc replacement. The purpose of this study was to evaluate the structural properties of the cervical endplate and examine their correlation with CT-measured trabecular bone density. Eight fresh human cadaver cervical spines (C2–T1) were CT scanned and the average trabecular bone densities of the vertebral bodies (C3–C7) were measured. Each endplate surface was biomechanically tested for regional yield load and stiffness using an indentation test method. The overall average density of the cervical vertebral body trabecular bone was 270 ± 74 mg/cm3, with no significant difference between levels. The yield load and stiffness from the indentation test of the endplate averaged 139 ± 99 N and 156 ± 52 N/mm across all cervical levels, endplate surfaces, and regional locations. The posterior aspect of the endplate had significantly higher yield load and stiffness than the anterior aspect, and the lateral aspect had significantly higher yield load than the midline aspect. On regression analysis, there was a significant correlation between the average yield load and stiffness of the cervical endplate and the trabecular bone density. Although there are significant regional variations in the endplate structural properties, the averaged endplate yield loads and stiffnesses correlated with the trabecular bone density. Given the morbidity associated with subsidence of interbody devices, a reliable and predictive method of measuring endplate strength in the cervical spine is required. Bone density measures may be used preoperatively to assist in predicting the strength of the vertebral endplate. A threshold density below which the probability of endplate fracture outweighs the benefit of an anterior cervical procedure has yet to be established.

  8. Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins.

    PubMed

    Nguyen, Bao Linh; Pettitt, B Montgomery

    2015-04-14

    The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations.

  9. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  10. Improving experimental phases for strong reflections prior to density modification

    DOE PAGES

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...

    2013-09-20

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  11. Potential of hydraulically induced fractures to communicate with existing wellbores

    NASA Astrophysics Data System (ADS)

    Montague, James A.; Pinder, George F.

    2015-10-01

    The probability that new hydraulically fractured wells drilled within the area of New York underlain by the Marcellus Shale will intersect an existing wellbore is calculated using a statistical model which incorporates the depth of a new fracturing well, the vertical growth of induced fractures, and the depths and locations of existing nearby wells. The model first calculates the probability of encountering an existing well in plan view and combines this with the probability of an existing well being at sufficient depth to intersect the fractured region. Average probability estimates for the entire region of New York underlain by the Marcellus Shale range from 0.00% to 3.45%, depending on the input parameters used. The parameter contributing most to the calculated probability is the density of nearby wells, meaning that due diligence by oil and gas companies in identifying all nearby wells during construction will have the greatest effect in reducing the probability of interwellbore communication.
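
    The two-factor combination described above can be sketched in a few lines; every number below is hypothetical, and the plan-view term assumes existing wells are scattered randomly (Poisson) around the new lateral.

        import math

        well_density = 0.05        # existing wells per km^2 (hypothetical)
        frac_reach = 0.3           # km, lateral reach of induced fractures
        lateral_len = 1.5          # km, length of the new horizontal well
        # Plan-view footprint swept by the fractured region around the lateral.
        area = 2.0 * frac_reach * lateral_len + math.pi * frac_reach ** 2
        p_plan = 1.0 - math.exp(-well_density * area)  # at least one well in the footprint
        p_depth = 0.10             # fraction of nearby wells deep enough to be reached
        print(p_plan * p_depth)    # probability of interwellbore communication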

  12. Stationary swarming motion of active Brownian particles in parabolic external potential

    NASA Astrophysics Data System (ADS)

    Zhu, Wei Qiu; Deng, Mao Lin

    2005-08-01

    We investigate the stationary swarming motion of active Brownian particles in a parabolic external potential, coupled to their mass center. Using Monte Carlo simulation, we first show that the mass center comes to rest after a sufficiently long period of time; thus, all the particles of a swarm have identical stationary motion relative to the mass center. Then the stationary probability density for the motion of a single active Brownian particle with the Rayleigh friction model in a parabolic potential, obtained in our previous paper using the stochastic averaging method for quasi-integrable Hamiltonian systems in 4-dimensional phase space, is used to describe the relative stationary motion of each particle of the swarm and to obtain further probability densities, including that of the total energy of the swarm. The analytical results are confirmed by comparison with simulation and are also shown to be consistent with the existing deterministic exact steady-state solution.

  13. PDF approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1993-01-01

    The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density function of the species mass fractions and enthalpy is obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suited to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte Carlo solutions and to ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without pdf are observed. Preliminary applications of the pdf method to 3D flows are also reported.

  14. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
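
    A simplified sketch of the deterministic idea: instead of drawing each gap from a Poisson distribution, the schedule below advances by the sine-weighted expected gap and bisects on the gap scale until approximately the requested number of points fits on the grid. This is a loose paraphrase of the approach, not the authors' published algorithm.

        import numpy as np

        def deterministic_gap_schedule(n_grid, n_samples):
            lo, hi = 0.0, 10.0 * n_grid / n_samples
            pts = []
            for _ in range(60):            # bisect on the gap-scale parameter
                lam = 0.5 * (lo + hi)
                pts, pos = [], 0
                while pos < n_grid:
                    pts.append(pos)
                    # Deterministic gap: the expected (sine-weighted) gap size,
                    # growing toward the end of the grid.
                    pos += 1 + int(round(lam * np.sin(0.5 * np.pi * pos / n_grid)))
                if len(pts) > n_samples:
                    lo = lam               # gaps too small: too many points
                else:
                    hi = lam               # gaps too large: too few points
            return np.array(pts)           # ~n_samples points, fully reproducible

        sched = deterministic_gap_schedule(1024, 256)
        print(len(sched), sched[:8])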

  15. On the emergence of a generalised Gamma distribution. Application to traded volume in financial markets

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, S. M.

    2005-08-01

    This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with power-law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was first proposed empirically to fit high-frequency stock traded volume distributions in financial markets and has been verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean stock traded volume with the typical herding behaviour exhibited by financial traders. Finally, NASDAQ 1- and 2-minute stock traded volume sequences and probability density functions are numerically reproduced.

  16. Global warming precipitation accumulation increases above the current-climate cutoff scale

    PubMed Central

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-01-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.
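
    The predicted sensitivity of the far tail to the cutoff can be illustrated with the stated pdf form, a power-law range with a large-event cutoff. The sketch below assumes p(s) ∝ s^(-τ) exp(-s / s_L) and compares tail probabilities before and after a modest cutoff extension; all parameter values are illustrative, not the paper's.

        import numpy as np

        def tail_prob(s, tau, s_L):
            # P(accumulation > s) for p(x) ~ x**(-tau) * exp(-x / s_L).
            x = np.logspace(-3, 6, 4000)
            p = x ** (-tau) * np.exp(-x / s_L)
            p /= np.trapz(p, x)
            mask = x > s
            return np.trapz(p[mask], x[mask])

        tau, s = 1.5, 500.0
        base = tail_prob(s, tau, s_L=100.0)
        warmed = tail_prob(s, tau, s_L=140.0)   # cutoff extended under warming
        print(warmed / base)   # several-fold (hundreds of percent) tail increase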

  17. Global warming precipitation accumulation increases above the current-climate cutoff scale

    NASA Astrophysics Data System (ADS)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-02-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  18. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  19. Global warming precipitation accumulation increases above the current-climate cutoff scale.

    PubMed

    Neelin, J David; Sahany, Sandeep; Stechmann, Samuel N; Bernstein, Diana N

    2017-02-07

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  20. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE PAGES

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; ...

    2017-01-23

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  1. Derivation of the collision probability between orbiting objects: The lifetimes of Jupiter's outer moons

    NASA Technical Reports Server (NTRS)

    Kessler, D. J.

    1981-01-01

    A general form is derived for Opik's equations relating the probability of collision between two orbiting objects to their orbital elements, and used to determine the collisional lifetime of the eight outer moons of Jupiter. The derivation is based on a concept of spatial density, or the average number of objects found in a unit volume, and results in a set of equations that are easily applied to a variety of orbital collision problems. When applied to the outer satellites, which are all in irregular orbits, the equations predict a relatively long collisional lifetime for the four retrograde moons (about 270 billion years on the average) and a shorter time for the four posigrade moons (0.9 billion years). This short time is suggestive of a past collision history, and may account for the orbiting dust detected by Pioneers 10 and 11.
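
    The spatial-density picture reduces the collision rate to a product of three factors. A minimal sketch with hypothetical numbers (the actual values require the element-averaged densities and velocities from the derivation):

        import math

        S = 1e-24                              # average spatial density of crossing objects, km^-3
        sigma = math.pi * (75.0 + 20.0) ** 2   # collision cross-section, km^2 (pi times summed radii squared)
        v_rel = 3.0                            # mean relative velocity, km/s
        rate = S * sigma * v_rel * 3.156e7     # collisions per year (3.156e7 s/yr)
        print(1.0 / rate)                      # mean collisional lifetime in years (~4e11 here)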

  2. Optimized nested Markov chain Monte Carlo sampling: theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
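
    A minimal sketch of the composite-move construction: an inner Metropolis chain samples a cheap reference potential, and the composite move is accepted with the modified criterion that depends only on the full-minus-reference energy differences at the two endpoints. The one-dimensional potentials below are toy stand-ins, not the molecular-fluid models of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        def e_ref(x):  return 0.5 * x ** 2                  # cheap reference potential
        def e_full(x): return 0.5 * x ** 2 + 0.1 * x ** 4   # expensive "full" energy (toy)

        def nested_step(x, beta, n_inner, step):
            # Inner chain: ordinary Metropolis on the reference potential.
            y = x
            for _ in range(n_inner):
                trial = y + rng.normal(0.0, step)
                if rng.random() < np.exp(-beta * (e_ref(trial) - e_ref(y))):
                    y = trial
            # Composite move: modified Metropolis criterion using only the
            # endpoint differences between full and reference energies.
            dw = (e_full(y) - e_ref(y)) - (e_full(x) - e_ref(x))
            return y if rng.random() < np.exp(-beta * dw) else x

        x = 0.0
        for _ in range(200):
            x = nested_step(x, beta=1.0, n_inner=25, step=0.5)
        print(x)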

  3. Correlations between polarisation states of W particles in the reaction e⁻e⁺ → W⁻W⁺ at LEP2 energies 189-209 GeV

    NASA Astrophysics Data System (ADS)

    Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; McNulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nemecek, S.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Radojicic, D.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.

    2009-10-01

    In a study of the reaction e⁻e⁺ → W⁻W⁺ with the DELPHI detector, the probabilities of the two W particles occurring in the joint polarisation states transverse-transverse (TT), longitudinal-transverse plus transverse-longitudinal (LT) and longitudinal-longitudinal (LL) have been determined using the final states WW → ℓνqq̄ (ℓ = e, μ). The two-particle joint polarisation probabilities, i.e. the spin density matrix elements ρ_TT, ρ_LT, ρ_LL, are measured as functions of the W⁻ production angle, θ_{W⁻}, at an average reaction energy of 198.2 GeV. Averaged over all cos θ_{W⁻}, the following joint probabilities are obtained: ρ̄_TT = (67±8)%, ρ̄_LT = (30±8)%, ρ̄_LL = (3±7)%. These results agree with the Standard Model predictions of 63.0%, 28.9% and 8.1%, respectively. The related polarisation cross sections σ_TT, σ_LT and σ_LL are also presented.

  4. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
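
    A minimal Monte Carlo sketch of the sampling problem described above: station abundances are drawn from a right-skewed lognormal that stands in for a patchy benthic population, repeated surveys of n random stations are simulated, and the probability of a "survey availability event" (survey index >125% or <75% of truth) is tabulated against sample number. The field, its skewness, and the survey counts are illustrative assumptions, not the study's data.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical patchy abundance field: a right-skewed lognormal stands in
      # for the clumped station densities typical of shelf molluscs.
      field = rng.lognormal(mean=0.0, sigma=1.8, size=100_000)
      true_mean = field.mean()

      def survey(n_samples, n_surveys=5000):
          """Simulate repeated surveys, each averaging n_samples random stations."""
          means = rng.choice(field, size=(n_surveys, n_samples)).mean(axis=1)
          ratio = means / true_mean
          p_event = np.mean((ratio < 0.75) | (ratio > 1.25))  # availability event
          p_low = np.mean(ratio < 0.75)                       # biased-low share
          return np.median(means), p_event, p_low

      for n in (2, 4, 8, 15, 30):
          med, p_event, p_low = survey(n)
          print(f"n={n:2d}  median/true={med / true_mean:.2f}  "
                f"P(event)={p_event:.2f}  P(low)={p_low:.2f}")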

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.
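
    The ensemble quantities listed above follow mechanically from the energy spectrum. The sketch below shows the standard canonical bookkeeping only, with a placeholder anharmonic spectrum standing in for the actual Yurke–Buks eigenvalues (which are not given here); the asserted content is just Z = Σ e^(-βE_n), U = Σ p_n E_n, F = -β⁻¹ ln Z, S = β(U - F) with k_B = 1.

      import numpy as np

      def thermodynamics(energies, beta):
          """Canonical ensemble quantities for a discrete spectrum (k_B = 1)."""
          w = np.exp(-beta * energies)
          z = w.sum()                      # partition function
          p = w / z                        # Boltzmann weights
          u = np.sum(p * energies)         # average energy
          f = -np.log(z) / beta            # free energy
          s = beta * (u - f)               # entropy
          return z, u, f, s

      # Placeholder anharmonic spectrum (harmonic levels plus a quartic,
      # Kerr-like shift); the actual Yurke-Buks eigenvalues would replace it.
      n = np.arange(200)
      energies = n + 0.5 + 0.05 * (n + 0.5) ** 2
      print(thermodynamics(energies, beta=1.0))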

  6. Approaches to Evaluating Probability of Collision Uncertainty

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done so mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for an operationally useful display and interpretation of these data for a particular conjunction is given.
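
    A hedged sketch of the resampling idea: compute the 2D Pc by Monte Carlo integration of the Gaussian relative position over the combined hard-body circle, then redraw the uncertain inputs (covariance scale, object sizes) many times to build a distribution of Pc values instead of a point estimate. All numbers and the lognormal/uniform perturbation choices are illustrative assumptions, not the paper's method.

      import numpy as np

      rng = np.random.default_rng(0)

      def pc_2d(miss, cov, r_hbr, n_mc=20_000):
          """Monte Carlo 2D Pc: chance the Gaussian relative position falls
          inside the combined hard-body circle of radius r_hbr."""
          pts = rng.multivariate_normal(miss, cov, size=n_mc)
          return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < r_hbr)

      # Nominal conjunction-plane inputs (illustrative numbers, not real data)
      miss = np.array([150.0, 80.0])              # metres
      cov = np.array([[90.0 ** 2, 0.0],
                      [0.0, 60.0 ** 2]])          # metres^2
      r_nom = 20.0                                # combined hard-body radius, m

      # Resample the uncertain inputs to get a distribution of Pc rather
      # than a single point estimate.
      pcs = np.array([
          pc_2d(miss,
                rng.lognormal(0.0, 0.3) * cov,    # covariance scale uncertainty
                r_nom * rng.uniform(0.8, 1.2))    # object-size uncertainty
          for _ in range(300)
      ])
      print("Pc 5/50/95 percentiles:", np.percentile(pcs, [5, 50, 95]))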

  7. Extended Statistical Short-Range Guidance for Peak Wind Speed Analyses at the Shuttle Landing Facility: Phase II Results

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2003-01-01

    This report describes the results from Phase II of the AMU's Short-Range Statistical Forecasting task for peak winds at the Shuttle Landing Facility (SLF). The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. Probability density functions (PDFs) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. A PC-based Graphical User Interface (GUI) tool was created to display the data quickly.
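
    The PDFs described above support a simple conditional-probability lookup: given an observed or forecast average speed, estimate the chance that the peak meets or exceeds a threshold from the historical record. A sketch under assumed data follows; the synthetic gust-factor relationship stands in for the actual tower record.

      import numpy as np

      def peak_exceedance(avg_obs, peak_obs, avg_now, threshold, half_width=1.0):
          """P(peak >= threshold | average near avg_now), estimated from
          historical (average, peak) pairs in a window around avg_now."""
          avg_obs, peak_obs = np.asarray(avg_obs), np.asarray(peak_obs)
          sel = np.abs(avg_obs - avg_now) <= half_width
          return np.mean(peak_obs[sel] >= threshold) if sel.any() else np.nan

      # Synthetic stand-in for a tower record (knots): a noisy gust factor
      # plays the role of the observed peak/average relationship.
      rng = np.random.default_rng(3)
      avg = rng.gamma(shape=4.0, scale=3.0, size=50_000)
      peak = avg * rng.lognormal(mean=np.log(1.4), sigma=0.15, size=avg.size)

      print(peak_exceedance(avg, peak, avg_now=15.0, threshold=25.0))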

  8. Assessing environmental DNA detection in controlled lentic systems.

    PubMed

    Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin

    2014-01-01

    Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m3). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324 1-L water samples collected from four experimental ponds. The best-approximating model indicated that the per-L-sample probability of eDNA detection was 4.86 times more likely for every 2.53 fish/m3 (1 SD) increase in fish density and 1.67 times less likely for every 1.02 °C (1 SD) increase in water temperature. The best section of the water column to detect eDNA was the surface and, to a lesser extent, the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability of detecting its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m3 or 1.75 g/m3).
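
    The water-volume figures above follow from the standard at-least-one-detection calculation, assuming independent 1-L samples with a common per-litre detection probability p: the smallest n with 1 - (1 - p)^n >= 0.95. The per-litre probabilities below are back-of-envelope choices that reproduce the abstract's ballpark, not the fitted model estimates.

      import math

      def litres_needed(p_per_litre, target=0.95):
          """Smallest n with 1 - (1 - p)^n >= target, assuming independent
          1-L samples with a common detection probability p."""
          return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_litre))

      # Illustrative per-litre probabilities (high, moderate, low fish density):
      for p in (0.65, 0.07, 0.04, 0.03):
          print(f"p = {p:.2f}  ->  {litres_needed(p)} L")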

  9. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x⃗, knowing prior information on these parameters and a likelihood that gives the probability density of observing a data set knowing x⃗. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations inherent in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance ranges to the continuum, for all nuclear reaction models at these energies. Algorithms are presented based on Monte-Carlo sampling and Markov chains. The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the chosen probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, are presented.
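
    A minimal sketch of the Monte-Carlo/Markov-chain sampling the abstract refers to: a random-walk Metropolis chain targeting pdf(posterior) ∝ pdf(prior) × likelihood. The two-parameter power-law "model", the Gaussian prior, and all numbers are toy assumptions standing in for a real nuclear-model fit.

      import numpy as np

      rng = np.random.default_rng(42)

      def log_prior(x):
          """Gaussian prior on the model parameters (illustrative)."""
          return -0.5 * np.sum(((x - prior_mean) / prior_sig) ** 2)

      def log_likelihood(x):
          """Gaussian comparison of model predictions with measurements."""
          return -0.5 * np.sum(((model(x) - data) / data_sig) ** 2)

      def metropolis(x0, step, n_iter=50_000):
          """Random-walk Markov chain targeting pdf(prior) x likelihood."""
          x = np.array(x0, dtype=float)
          logp = log_prior(x) + log_likelihood(x)
          chain = np.empty((n_iter, x.size))
          for i in range(n_iter):
              prop = x + step * rng.standard_normal(x.size)
              logp_prop = log_prior(prop) + log_likelihood(prop)
              if np.log(rng.random()) < logp_prop - logp:   # accept/reject
                  x, logp = prop, logp_prop
              chain[i] = x
          return chain

      # Toy stand-ins for a nuclear-model fit (not actual resonance physics):
      model = lambda x: x[0] * np.linspace(1.0, 10.0, 20) ** x[1]
      prior_mean, prior_sig = np.array([1.0, 0.5]), np.array([0.5, 0.25])
      data_sig = 0.1
      data = model(np.array([1.2, 0.4])) + data_sig * rng.standard_normal(20)

      chain = metropolis(prior_mean, step=np.array([0.02, 0.01]))
      print("posterior mean:", chain[10_000:].mean(axis=0))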

  10. Variation in human fungiform taste bud densities among regions and subjects.

    PubMed

    Miller, I J

    1986-12-01

    Taste sensitivity is known to vary among regions of the tongue and between subjects. The distribution of taste buds on the human tongue is examined in this report to determine if interregional and intersubject variation of taste bud density might account for some of the variation in human taste sensitivity. The subjects were ten males, aged 22-80 years, who died from acute trauma or an acute cardiovascular episode. Specimens were obtained as anatomical gifts or from autopsy. A sample of tissue about 1 cm2 was taken from the tongue tip and midlateral region; frozen sections were prepared for light microscopy; and serial sections were examined by light microscopy to count the taste buds. The average taste bud (tb) density on the tongue tip was 116 tb/cm2 with a range from 3.6 to 514 among subjects. The number of gustatory papillae on the tip averaged 24.5 papillae/cm2 with a range from 2.4 to 80. Taste bud density in the midregion averaged 25.2 tb/cm2 (range: 0-85.9), and the mean number of gustatory papillae was 8.25/cm2 (range: 0-28). The mean number of taste buds per papilla was 3.8 +/- 2.2 (s.d.) on the tip and 2.6 +/- 1.5 (s.d.) on the midregion. Subjects with the highest taste bud densities on the tip also had the highest densities in the midregion and the highest number of taste buds per papilla. Taste bud density was 4.6 times higher on the tip than the midregion, which probably accounts for some of the regional difference in taste sensitivity.

  11. Gravity anomaly and density structure of the San Andreas fault zone

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Yuen; Rui, Feng; Zhengsheng, Yao; Xingjue, Shi

    1986-01-01

    A densely spaced gravity survey across the San Andreas fault zone was conducted near Bear Valley, about 180 km south of San Francisco, along a cross-section where a detailed seismic reflection profile was previously made by McEvilly (1981). With Feng and McEvilly's velocity structure (1983) of the fault zone at this cross-section as a constraint, the density structure of the fault zone is obtained through inversion of the gravity data by a method used by Parker (1973) and Oldenburg (1974). Although the resulting density picture cannot be unique, it is better constrained and contains more detailed information about the structure of the fault than was previously possible. The most striking feature of the resulting density structure is a deeply seated tongue of low-density material within the fault zone, probably representing a wedge of fault gouge between the two moving plates, which projects from the surface to the base of the seismogenic zone. From reasonable assumptions concerning the density of the solid grains and the state of saturation of the fault zone, the average porosity of this low-density fault gouge is estimated at about 12%. Stress-induced cracks are not expected to create so much porosity under the pressures in the deep fault zone. Large-scale removal of fault-zone material by hydrothermal alteration, dissolution, and subsequent fluid transport may have occurred to produce this pronounced density deficiency. In addition, a broad, funnel-shaped belt of low density appears about the upper part of the fault zone, which probably represents a belt of extensively shattered wall rocks.

  12. Divergence of perturbation theory in large scale structures

    NASA Astrophysics Data System (ADS)

    Pajer, Enrico; van der Woude, Drian

    2018-05-01

    We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.

  13. Correlations between U.S. county annual cancer incidence and population density.

    PubMed

    Vares, David Ae; St-Pierre, Linda S; Persinger, Michael A

    2015-01-01

    Population density implicitly involves specific distances between living individuals who exhibit biophysical forces and energies. The objective was to investigate major databases of cancer incidence and population data to help understand the emergent properties of diseases that become apparent only when large populations and areas are considered. Correlation analyses of the annual incidence (years 2007 to 2011) of cancer in the counties (2,885) of the U.S. against population densities were convergent with these quantitative predictions and suggested an inflection threshold around 50 people per square mile. The potential role of subtle or even "non-local" factors coupled to averaged population density in the viability and mortality of the human species may serve as an alternative explanation to the attribution of malignancy to "chance" factors. Calculations indicated that the average distances between the electric force dipoles of the brains or bodies of human beings generate forces known to affect DNA extension and, when distributed over the Compton wavelength of the electron, could produce energies sufficient to affect the binding of base nucleotides. An inclusive science of human ecology might benefit from considering subtle forces and energies associated with the individual members within the habitat that could determine the probability of cellular anomalies.

  14. Effects of Acids, Bases, and Heteroatoms on Proximal Radial Distribution Functions for Proteins

    PubMed Central

    Nguyen, Bao Linh; Pettitt, B. Montgomery

    2015-01-01

    The proximal distribution of water around proteins is a convenient method of quantifying solvation. We consider the effect of charged and sulfur-containing amino acid side-chain atoms on the proximal radial distribution function (pRDF) of water molecules around proteins using side-chain analogs. The pRDF represents the relative probability of finding any solvent molecule at a distance from the closest or surface perpendicular protein atom. We consider the near-neighbor distribution. Previously, pRDFs were shown to be universal descriptors of the water molecules around C, N, and O atom types across hundreds of globular proteins. Using averaged pRDFs, a solvent density around any globular protein can be reconstructed with controllable relative error. Solvent reconstruction using the additional information from charged amino acid side-chain atom types from both small models and protein averages reveals the effects of surface charge distribution on solvent density and improves the reconstruction errors relative to simulation. Solvent density reconstructions from the small-molecule models are as effective and less computationally demanding than reconstructions from full macromolecular models in reproducing preferred hydration sites and solvent density fluctuations. PMID:26388706
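
    A simplified sketch of the nearest-atom construction behind a pRDF: for every water molecule, take the distance to its closest protein atom and histogram those distances, normalising each radial shell by its volume and the bulk density. The spherical-shell normalisation is a simplifying assumption (the surface-referenced pRDF normalises by the volume of the proximal shell around the molecular surface), and the coordinates are random placeholders, not a real structure.

      import numpy as np

      def proximal_rdf(water_xyz, protein_xyz, r_max=8.0, dr=0.2, box_volume=None):
          """Histogram each water's distance to its nearest protein atom,
          normalised by spherical-shell volume and bulk density (a
          simplified stand-in for the surface-referenced pRDF)."""
          d = np.min(np.linalg.norm(
              water_xyz[:, None, :] - protein_xyz[None, :, :], axis=-1), axis=1)
          edges = np.arange(0.0, r_max + dr, dr)
          counts, _ = np.histogram(d, bins=edges)
          shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
          bulk = len(water_xyz) / box_volume if box_volume else 1.0
          return 0.5 * (edges[1:] + edges[:-1]), counts / (shell_vol * bulk)

      # Random placeholder coordinates in a 40 A box (no real structure)
      rng = np.random.default_rng(7)
      protein = rng.uniform(15.0, 25.0, size=(300, 3))
      water = rng.uniform(0.0, 40.0, size=(5000, 3))
      r, g = proximal_rdf(water, protein, box_volume=40.0 ** 3)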

  15. Photographic mark-recapture analysis of local dynamics within an open population of dolphins.

    PubMed

    Fearnbach, H; Durban, J; Parsons, K; Claridge, D

    2012-07-01

    Identifying demographic changes is important for understanding population dynamics. However, this requires long-term studies of definable populations of distinct individuals, which can be particularly challenging when studying mobile cetaceans in the marine environment. We collected photo-identification data from 19 years (1992-2010) to assess the dynamics of a population of bottlenose dolphins (Tursiops truncatus) restricted to the shallow (<7 m) waters of Little Bahama Bank, northern Bahamas. This population was known to range beyond our study area, so we adopted a Bayesian mixture modeling approach to mark-recapture to identify clusters of individuals that used the area to different extents, and we specifically estimated trends in survival, recruitment, and abundance of a "resident" population with high probabilities of identification. There was a high probability (p= 0.97) of a long-term decrease in the size of this resident population from a maximum of 47 dolphins (95% highest posterior density intervals, HPDI = 29-61) in 1996 to a minimum of just 24 dolphins (95% HPDI = 14-37) in 2009, a decline of 49% (95% HPDI = approximately 5% to approximately 75%). This was driven by low per capita recruitment (average approximately 0.02) that could not compensate for relatively low apparent survival rates (average approximately 0.94). Notably, there was a significant increase in apparent mortality (approximately 5 apparent mortalities vs. approximately 2 on average) in 1999 when two intense hurricanes passed over the study area, with a high probability (p = 0.83) of a drop below the average survival probability (approximately 0.91 in 1999; approximately 0.94, on average). As such, our mark-recapture approach enabled us to make useful inference about local dynamics within an open population of bottlenose dolphins; this should be applicable to other studies challenged by sampling highly mobile individuals with heterogeneous space use.

  16. Six-dimensional quantum dynamics study for the dissociative adsorption of DCl on Au(111) surface

    NASA Astrophysics Data System (ADS)

    Liu, Tianhui; Fu, Bina; Zhang, Dong H.

    2014-04-01

    We carried out six-dimensional quantum dynamics calculations for the dissociative adsorption of deuterium chloride (DCl) on the Au(111) surface using the initial state-selected time-dependent wave packet approach. The four-dimensional dissociation probabilities are also obtained with the center of mass of DCl fixed at various sites. These calculations were all performed based on an accurate potential energy surface recently constructed by neural network fitting to density functional theory energy points. The origin of the extremely small dissociation probability for DCl/HCl (v = 0, j = 0) fixed at the top site compared to other fixed sites is elucidated in this study. The influence of vibrational excitation and rotational orientation of DCl on the reactivity was investigated by calculating six-dimensional dissociation probabilities. The vibrational excitation of DCl enhances the reactivity substantially and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. The site-averaged dissociation probability over 25 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability.
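
    The site-averaging approximation referenced above is, operationally, an area-weighted mean of fixed-site dissociation probabilities. A sketch with hypothetical site probabilities and weights (not the paper's 25-site data):

      import numpy as np

      def site_averaged_probability(p_sites, weights):
          """Weighted mean of fixed-site dissociation probabilities,
          approximating the full-dimensional result."""
          p, w = np.asarray(p_sites, float), np.asarray(weights, float)
          return np.sum(w * p) / np.sum(w)

      # Hypothetical fixed-site probabilities with equal area weights for a
      # few high-symmetry impact sites (illustrative values only).
      p_sites = [1e-4, 3e-3, 2e-3, 8e-4]      # top, bridge, fcc, hcp
      weights = [0.25, 0.25, 0.25, 0.25]
      print(site_averaged_probability(p_sites, weights))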

  18. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of the average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived, considering zero-boresight misalignment errors on the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.

  19. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899-902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  20. Constraints on Average Radial Anisotropy in the Lower Mantle

    NASA Astrophysics Data System (ADS)

    Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.

    2014-12-01

    Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies for spheroidal and toroidal modes for three parameters of average radial anisotropy, density, and P- and S-wave velocities in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. Other parameters (P- and shear-wave velocities) are close to PREM. Our results require that the average mantle is about 150 K colder than commonly assumed adiabats and consists of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.

  1. CARS applications to combustion diagnostics

    NASA Astrophysics Data System (ADS)

    Eckbreth, Alan C.

    1986-01-01

    Attention is given to broadband or multiplex CARS of combustion processes, using pulsed lasers whose intensity is sufficiently great for instantaneous measurement of medium properties. This permits probability density functions to be assembled from a series of single-pulse measurements, on the basis of which the true parameter average and the magnitude of the turbulent fluctuations can be ascertained. CARS measurements have been conducted along these lines in diesel engines, gas turbine combustors, scramjets, and solid rocket propellants.
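
    As a sketch of the single-pulse statistics described above, under a purely synthetic series of shot-to-shot temperatures: the histogram of pulses estimates the PDF, its mean gives the true parameter average, and its standard deviation gives the magnitude of the turbulent fluctuations. The gamma distribution and all numbers are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      shots = rng.gamma(shape=9.0, scale=200.0, size=2000)   # single-pulse T, K

      pdf, edges = np.histogram(shots, bins=50, density=True)  # assembled PDF
      t_mean = shots.mean()        # true parameter average
      t_rms = shots.std()          # magnitude of turbulent fluctuations
      print(f"mean T = {t_mean:.0f} K, rms fluctuation = {t_rms:.0f} K")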

  2. Gas sorption and barrier properties of polymeric membranes from molecular dynamics and Monte Carlo simulations.

    PubMed

    Cozmuta, Ioana; Blanco, Mario; Goddard, William A

    2007-03-29

    It is important for many industrial processes to design new materials with improved selective permeability properties. Besides diffusion, the molecule's solubility contributes largely to the overall permeation process. This study presents a method to calculate solubility coefficients of gases such as O2, H2O (vapor), N2, and CO2 in polymeric matrices from simulation methods (Molecular Dynamics and Monte Carlo) using first principle predictions. The generation and equilibration (annealing) of five polymer models (polypropylene, polyvinyl alcohol, polyvinyl dichloride, polyvinyl chloride-trifluoroethylene, and polyethylene terephthalate) are extensively described. For each polymer, the average density and Hansen solubilities over a set of ten samples compare well with experimental data. For polyethylene terephthalate, the average properties between a small (n = 10) and a large (n = 100) set are compared. Boltzmann averages and probability density distributions of binding and strain energies indicate that the smaller set is biased in sampling configurations with higher energies. However, the sample with the lowest cohesive energy density from the smaller set is representative of the average of the larger set. Density-wise, low molecular weight polymers tend to have on average lower densities. Infinite molecular weight samples do, however, provide a very good representation of the experimental density. Solubility constants calculated with two ensembles (grand canonical and Henry's constant) are equivalent within 20%. For each polymer sample, the solubility constant is then calculated using the faster (10×) Henry's constant ensemble (HCE) from 150 ps of NPT dynamics of the polymer matrix. The influence of various factors (bad contact fraction, number of iterations) on the accuracy of Henry's constant is discussed. To validate the calculations against experimental results, the solubilities of nitrogen and carbon dioxide in polypropylene are examined over a range of temperatures between 250 and 650 K. The magnitudes of the calculated solubilities agree well with experimental results, and the trends with temperature are predicted correctly. The HCE method is used to predict the solubility constants at 298 K of water vapor and oxygen. The water vapor solubilities follow more closely the experimental trend of permeabilities, both ranging over 4 orders of magnitude. For oxygen, the calculated values do not follow entirely the experimental trend of permeabilities, most probably because at this temperature some of the polymers are in the glassy regime and thus are diffusion dominated. Our study also concludes that large confidence limits are associated with the calculated Henry's constants. By investigating several factors (terminal ends of the polymer chains, void distribution, etc.), we conclude that the large confidence limits are intimately related to the polymer's conformational changes caused by thermal fluctuations and have to be regarded, at least at the microscale, as a characteristic of each polymer and the nature of its interaction with the solute. Reducing the mobility of the polymer matrix as well as controlling the distribution of the free (occupiable) volume would act as mechanisms toward lowering both the gas solubility and the diffusion coefficients.
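
    A hedged sketch of the test-particle logic behind a Henry's-constant ensemble: the solubility coefficient is proportional to the Boltzmann average of exp(-U/kT) over trial insertions of the solute into polymer configurations. Whether this matches the paper's exact HCE implementation is an assumption, and the bimodal insertion-energy distribution below is purely illustrative.

      import numpy as np

      def henry_solubility_factor(insertion_energies_kjmol, temperature_k):
          """Boltzmann average of exp(-U/kT) over random test insertions of
          the solute into frozen polymer configurations; the solubility
          coefficient is proportional to this Widom-style average."""
          kt = 0.008314 * temperature_k           # kJ/mol
          return np.mean(np.exp(-np.asarray(insertion_energies_kjmol) / kt))

      # Illustrative insertion energies: most trials overlap the polymer
      # (large positive U); a few land in free volume (negative U).
      rng = np.random.default_rng(11)
      n = 100_000
      u = np.where(rng.random(n) < 0.02,
                   rng.normal(-8.0, 3.0, n),      # rare favourable cavities
                   rng.normal(60.0, 20.0, n))     # overlapping insertions
      print(henry_solubility_factor(u, temperature_k=298.0))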

  3. Water dissociating on rigid Ni(100): A quantum dynamics study on a full-dimensional potential energy surface

    NASA Astrophysics Data System (ADS)

    Liu, Tianhui; Chen, Jun; Zhang, Zhaojun; Shen, Xiangjian; Fu, Bina; Zhang, Dong H.

    2018-04-01

    We constructed a nine-dimensional (9D) potential energy surface (PES) for the dissociative chemisorption of H2O on a rigid Ni(100) surface using the neural network method, based on roughly 110 000 energies obtained from extensive density functional theory (DFT) calculations. The resulting PES is accurate and smooth, based on the small fitting errors and the good agreement between the fitted PES and the direct DFT calculations. Time dependent wave packet calculations also showed that the PES is very well converged with respect to the fitting procedure. The dissociation probabilities of H2O initially in the ground rovibrational state from 9D quantum dynamics calculations are quite different from the site-specific results of the seven-dimensional (7D) calculations, indicating the importance of full-dimensional quantum dynamics for quantitatively characterizing this gas-surface reaction. It is found that the site-averaging approximation with the exact potential holds well: the site-averaged dissociation probability over 15 fixed impact sites obtained from 7D quantum dynamics calculations accurately approximates the 9D dissociation probability for H2O in the ground rovibrational state.

  4. Statistical Short-Range Guidance for Peak Wind Speed Forecasts on Kennedy Space Center/Cape Canaveral Air Force Station: Phase I Results

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.; Merceret, Francis J. (Technical Monitor)

    2002-01-01

    This report describes the results of the AMU's (Applied Meteorology Unit) Short-Range Statistical Forecasting task for peak winds. The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. In all climatologies, the average and peak wind speeds were highly variable in time. This indicated that the development of a peak wind forecasting tool would be difficult. Probability density functions (PDFs) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. The climatologies and PDFs provide tools with which to make peak wind forecasts that are critical to safe operations.

  5. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  6. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  7. Generic dynamical features of quenched interacting quantum systems: Survival probability, density imbalance, and out-of-time-ordered correlator

    NASA Astrophysics Data System (ADS)

    Torres-Herrera, E. J.; García-García, Antonio M.; Santos, Lea F.

    2018-02-01

    We study numerically and analytically the quench dynamics of isolated many-body quantum systems. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional disordered model with two-body interactions and shown to bound the decay rate of this realistic system. Power-law decays are seen at intermediate times, and dips below the infinite time averages (correlation holes) occur at long times for all three quantities when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium. Assisted by the random matrix analytical results, we propose expressions that describe extremely well the dynamics of the realistic chaotic system at different time scales.
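
    A numerical sketch of the survival probability for full random matrices, as used above: diagonalise a GOE matrix, expand an arbitrary initial state in its eigenbasis, and ensemble-average |⟨ψ(0)|ψ(t)⟩|². Matrix size, time grid, and realisation count are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(5)

      def goe(n):
          """Random matrix from the Gaussian orthogonal ensemble."""
          a = rng.standard_normal((n, n))
          return (a + a.T) / 2.0

      n, n_real = 200, 50
      times = np.linspace(0.0, 10.0, 200)
      sp = np.zeros_like(times)
      for _ in range(n_real):                          # ensemble average
          evals, evecs = np.linalg.eigh(goe(n))
          c = evecs.T @ np.eye(n)[0]                   # arbitrary initial state
          weights = np.abs(c) ** 2                     # eigenbasis populations
          amps = np.exp(-1j * np.outer(times, evals)) @ weights
          sp += np.abs(amps) ** 2                      # survival probability
      sp /= n_real
      print(sp[:5])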

  8. Probability of detecting band-tailed pigeons during call-broadcast versus auditory surveys

    USGS Publications Warehouse

    Kirkpatrick, C.; Conway, C.J.; Hughes, K.M.; Devos, J.C.

    2007-01-01

    Estimates of population trend for the interior subspecies of band-tailed pigeon (Patagioenas fasciata fasciata) are not available because no standardized survey method exists for monitoring the interior subspecies. We evaluated 2 potential band-tailed pigeon survey methods (auditory and call-broadcast surveys) from 2002 to 2004 in 5 mountain ranges in southern Arizona, USA, and in mixed-conifer forest throughout the state. Both auditory and call-broadcast surveys produced low numbers of cooing pigeons detected per survey route (x̄ ≤ 0.67) and had relatively high temporal variance in average number of cooing pigeons detected during replicate surveys (CV ≥ 161%). However, compared to auditory surveys, use of call-broadcast increased 1) the percentage of replicate surveys on which ≥1 cooing pigeon was detected by an average of 16%, and 2) the number of cooing pigeons detected per survey route by an average of 29%, with this difference being greatest during the first 45 minutes of the morning survey period. Moreover, probability of detecting a cooing pigeon was 27% greater during call-broadcast (0.80) versus auditory (0.63) surveys. We found that cooing pigeons were most common in mixed-conifer forest in southern Arizona and density of male pigeons in mixed-conifer forest throughout the state averaged 0.004 (SE = 0.001) pigeons/ha. Our results are the first to show that call-broadcast increases the probability of detecting band-tailed pigeons (or any species of Columbidae) during surveys. Call-broadcast surveys may provide a useful method for monitoring populations of the interior subspecies of band-tailed pigeon in areas where other survey methods are inappropriate.

  9. Intermittent turbulence and turbulent structures in LAPD and ET

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Pace, D. C.; White, A. E.; Gauvreau, J.-L.; Gourdain, P.-A.; Schmitz, L.; Taylor, R. J.

    2006-12-01

    Strongly intermittent turbulence is observed in the shadow of a limiter in the Large Plasma Device (LAPD) and in both the inboard and outboard scrape-off-layer (SOL) in the Electric Tokamak (ET) at UCLA. In LAPD, the amplitude probability distribution function (PDF) of the turbulence is strongly skewed, with density depletion events (or "holes") dominant in the high density region and density enhancement events (or "blobs") dominant in the low density region. Two-dimensional cross-conditional averaging shows that the blobs are detached, outward-propagating filamentary structures with a clear dipolar potential while the holes appear to be part of a more extended turbulent structure. A statistical study of the blobs reveals a typical size of ten times the ion sound gyroradius and a typical velocity of one tenth the sound speed. In ET, intermittent turbulence is observed on both the inboard and outboard midplane.

  10. Progress in the development of PDF turbulence models for combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    A combined Monte Carlo-computational fluid dynamic (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equation) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.

  11. A statistical study of gyro-averaging effects in a reduced model of drift-wave transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.

    2016-08-25

    Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K₀, becomes K₀J₀(p̂), where J₀ is the zeroth-order Bessel function and p̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for p̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K₀J₀(p̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), P_c, and the probability of trapping, P_t. It is shown that P_c provides an upper bound for the escape rate and that P_t provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte-Carlo simulations of particle transport.
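
    A minimal sketch of the gyro-averaged standard map itself: the standard-map kick K₀ sin θ is scaled by J₀(p̂), so particles with different Larmor radii see different effective perturbations, including sign reversals near zeros of J₀. Drawing the radii from a Rayleigh distribution (the perpendicular-speed magnitude implied by a Maxwellian) and all parameter values are illustrative assumptions.

      import numpy as np
      from scipy.special import j0

      def gsm_orbit(theta0, p0, rho, k0=3.0, n_steps=2000):
          """Iterate the gyro-averaged standard map: the kick amplitude k0
          is scaled by J0(rho), so large-Larmor-radius particles feel a
          weaker (or sign-reversed) perturbation."""
          k_eff = k0 * j0(rho)
          theta, p = theta0, p0
          momenta = np.empty(n_steps)
          for i in range(n_steps):
              p = p + k_eff * np.sin(theta)            # perturbation kick
              theta = (theta + p) % (2.0 * np.pi)      # free streaming
              momenta[i] = p
          return momenta

      rng = np.random.default_rng(2)
      for rho in rng.rayleigh(scale=1.0, size=5):      # thermal Larmor radii
          spread = gsm_orbit(1.0, 0.0, rho).std()      # crude transport proxy
          print(f"rho={rho:.2f}  K_eff={3.0 * j0(rho):+.2f}  p-spread={spread:.2f}")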

  12. Statistics of optical vortex wander on propagation through atmospheric turbulence.

    PubMed

    Gu, Yalong

    2013-04-01

    The transverse position of an optical vortex on propagation through atmospheric turbulence is studied. The probability density of the optical vortex position on a transverse plane in the atmosphere is formulated in weak turbulence by using the Born approximation. With these formulas, the effect of aperture averaging on topological charge detection is investigated. These results provide quantitative guidelines for the design of an optimal detector of topological charge, which has potential application in optical vortex communication systems.

  13. Understanding the Influence of Turbulence in Imaging Fourier-Transform Spectrometry of Smokestack Plumes

    DTIC Science & Technology

    2011-03-01

    capability of FTS to estimate plume effluent concentrations by comparing intrusive measurements of aircraft engine exhaust with those from an FTS. A... turbojet engine. Temporal averaging was used to reduce SCAs in the spectra, and spatial maps of temperature and concentration were generated. The time... density function (PDF) is defined as the derivative of the CDF, and describes the probability of obtaining a given value of X. For a normally...

  14. Increasing market efficiency in the stock markets

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Suk; Kwak, Wooseop; Kaizoji, Taisei; Kim, In-Mook

    2008-01-01

    We study the temporal evolutions of three stock markets: the Standard and Poor's 500 index, the Nikkei 225 Stock Average, and the Korea Composite Stock Price Index. We observe that the probability density function of the log-return has a fat tail, but the tail index has been increasing continuously in recent years. We have also found that the variance of the autocorrelation function, the scaling exponent of the standard deviation, and the statistical complexity decrease, while the entropy density increases, over time. We introduce a modified microscopic spin model and simulate the model to confirm such increasing and decreasing tendencies in statistical quantities. These findings indicate that these three stock markets are becoming more efficient.
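
    The tail index mentioned above is commonly estimated with the Hill estimator over the k largest absolute log-returns; a sketch on Student-t samples (whose true tail index equals the degrees of freedom, so the estimate can be checked) follows. The choice of k is an assumption that in practice requires care.

      import numpy as np

      def hill_tail_index(returns, k):
          """Hill estimator from the k largest absolute log-returns;
          a larger index means a thinner tail."""
          x = np.sort(np.abs(np.asarray(returns)))[::-1]
          return 1.0 / np.mean(np.log(x[:k] / x[k]))

      # Student-t samples stand in for daily log-returns.
      rng = np.random.default_rng(9)
      for df in (3.0, 5.0):
          r = rng.standard_t(df, size=100_000)
          print(df, round(hill_tail_index(r, k=500), 2))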

  15. Spatio-temporal variation in click production rates of beaked whales: Implications for passive acoustic density estimation.

    PubMed

    Warren, Victoria E; Marques, Tiago A; Harris, Danielle; Thomas, Len; Tyack, Peter L; Aguilar de Soto, Natacha; Hickmott, Leigh S; Johnson, Mark P

    2017-03-01

    Passive acoustic monitoring has become an increasingly prevalent tool for estimating density of marine mammals, such as beaked whales, which vocalize often but are difficult to survey visually. Counts of acoustic cues (e.g., vocalizations), when corrected for detection probability, can be translated into animal density estimates by applying an individual cue production rate multiplier. It is essential to understand variation in these rates to avoid biased estimates. The most direct way to measure cue production rate is with animal-mounted acoustic recorders. This study utilized data from sound recording tags deployed on Blainville's (Mesoplodon densirostris, 19 deployments) and Cuvier's (Ziphius cavirostris, 16 deployments) beaked whales, in two locations per species, to explore spatial and temporal variation in click production rates. No spatial or temporal variation was detected within the average click production rate of Blainville's beaked whales when calculated over dive cycles (including silent periods between dives); however, spatial variation was detected when averaged only over vocal periods. Cuvier's beaked whales exhibited significant spatial and temporal variation in click production rates within vocal periods and when silent periods were included. This evidence of variation emphasizes the need to utilize appropriate cue production rates when estimating density from passive acoustic data.

  16. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability of finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow the recolonization rate.
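
    One standard way to formalise the reduced mate-finding probability at low density, which may or may not match the paper's exact functional form, is a Poisson-encounter model: a disperser searching an area A in a population of density N finds at least one mate with probability 1 - exp(-N·A). A sketch with illustrative numbers:

      import math

      def establishment_probability(density, search_area):
          """P(disperser finds >= 1 mate) under random (Poisson) settlement;
          at low density this falls roughly linearly, giving the reduced
          rate of new-breeding-unit formation hypothesised above."""
          return 1.0 - math.exp(-density * search_area)

      for n in (0.01, 0.05, 0.2, 1.0):     # wolves per unit area (illustrative)
          print(n, round(establishment_probability(n, search_area=5.0), 3))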

  17. Spectral characteristics of earth-space paths at 2 and 30 GHz

    NASA Technical Reports Server (NTRS)

    Baxter, R. A.; Hodge, D. B.

    1978-01-01

    Spectral characteristics of 2 and 30 GHz signals received from the Applications Technology Satellite-6 (ATS-6) are analyzed in detail at elevation angles ranging from 0 deg to 44 deg. The spectra of the received signals are characterized by slopes and break frequencies. Statistics of these parameters are presented as probability density functions. Dependence of the spectral characteristics on elevation angle is investigated. The 2 and 30 GHz spectral shapes are contrasted through the use of scatter diagrams. The results are compared with those predicted from turbulence theory. The average spectral slopes are in close agreement with theory, although the departure from the average value at any given elevation angle is quite large.

  18. Weak limit of the three-state quantum walk on the line

    NASA Astrophysics Data System (ADS)

    Falkner, Stefan; Boettcher, Stefan

    2014-07-01

    We revisit the one-dimensional discrete time quantum walk with three states and the Grover coin, the simplest model that exhibits localization in a quantum walk. We derive analytic expressions for the localization and a long-time approximation for the entire probability density function (PDF). We find the possibility for asymmetric localization to the extreme that it vanishes completely on one site of the initial conditions. We also connect the time-averaged approximation of the PDF found by Inui et al. [Phys. Rev. E 72, 056112 (2005), 10.1103/PhysRevE.72.056112] to a spatial average of the walk. We show that this smoothed approximation predicts moments of the real PDF accurately.

  19. Combined statistical analysis of landslide release and propagation

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods, often coupled with stochastic concepts, are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone: the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by the typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. We therefore conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
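
    A sketch of steps (5)-(6) above under one plausible formalisation (an assumption, since the paper derives its zone-size relationship from empirical curves): treat pixel releases as independent, so the zonal release probability is 1 minus the product of the per-pixel non-release probabilities, then multiply by the impact probability.

      import numpy as np

      def zonal_release_probability(pixel_release_probs):
          """Probability that at least one pixel of the possible release
          zone releases a landslide (independence assumption)."""
          p = np.asarray(pixel_release_probs, dtype=float)
          return 1.0 - np.prod(1.0 - p)

      def combined_impact_probability(zonal_release, impact_prob):
          """Step (6): combine release and propagation probabilities."""
          return zonal_release * impact_prob

      zone = np.full(120, 0.004)     # 120-pixel release zone (illustrative)
      print(combined_impact_probability(zonal_release_probability(zone), 0.6))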

  20. Agricultural pesticide use in California: pesticide prioritization, use densities, and population distributions for a childhood cancer study.

    PubMed Central

    Gunier, R B; Harnly, M E; Reynolds, P; Hertz, A; Von Behren, J

    2001-01-01

    Several studies have suggested an association between childhood cancer and pesticide exposure. California leads the nation in agricultural pesticide use. A mandatory reporting system for all agricultural pesticide use in the state provides information on the active ingredient, amount used, and location. We calculated pesticide use density to quantify agricultural pesticide use in California block groups for a childhood cancer study. Pesticides with similar toxicologic properties (probable carcinogens, possible carcinogens, genotoxic compounds, and developmental or reproductive toxicants) were grouped together for this analysis. To prioritize pesticides, we weighted pesticide use by the carcinogenic and exposure potential of each compound. The top-ranking individual pesticides were propargite, methyl bromide, and trifluralin. We used a geographic information system to calculate pesticide use density in pounds per square mile of total land area for all United States census-block groups in the state. Most block groups (77%) averaged less than 1 pound per square mile of use for 1991-1994 for pesticides classified as probable human carcinogens. However, at the high end of use density (> 90th percentile), there were 493 block groups with more than 569 pounds per square mile. Approximately 170,000 children under 15 years of age were living in these block groups in 1990. The distribution of agricultural pesticide use and number of potentially exposed children suggests that pesticide use density would be of value for a study of childhood cancer. PMID:11689348

  1. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    NASA Astrophysics Data System (ADS)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated.

  2. Electron emission produced by photointeractions in a slab target

    NASA Technical Reports Server (NTRS)

    Thinger, B. E.; Dayton, J. A., Jr.

    1973-01-01

    The current density and energy spectrum of escaping electrons generated in a uniform plane slab target which is being irradiated by the gamma flux field of a nuclear reactor are calculated by using experimental gamma energy transfer coefficients, electron range and energy relations, and escape probability computations. The probability of escape and the average path length of escaping electrons are derived for an isotropic distribution of monoenergetic photons. The method of estimating the flux and energy distribution of electrons emerging from the surface is outlined, and a sample calculation is made for a 0.33-cm-thick tungsten target located next to the core of a nuclear reactor. The results are to be used as a guide in electron beam synthesis of reactor experiments.

  3. Continental crust

    USGS Publications Warehouse

    Pakiser, L.C.

    1964-01-01

    The structure of the Earth’s crust (the outer shell of the earth above the M-discontinuity) has been intensively studied in many places by use of geophysical methods. The velocity of seismic compressional waves in the crust and in the upper mantle varies from place to place in the conterminous United States. The average crust is thick in the eastern two-thirds of the United States, in which the crustal and upper-mantle velocities tend to be high. The average crust is thinner in the western one-third of the United States, in which these velocities tend to be low. The concept of eastern and western superprovinces can be used to classify these differences. Crustal and upper-mantle densities probably vary directly with compressional-wave velocity, leading to the conclusion that isostasy is accomplished by the variation in densities of crustal and upper-mantle rocks as well as in crustal thickness, and that there is no single, generally valid isostatic model. The nature of the M-discontinuity is still speculative.

  4. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
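
    The core identity here - the BMA predictive pdf as a posterior-weighted average of member pdfs - is easy to state in code. A minimal sketch with Gaussian member pdfs; the weights, forecasts, and spreads are illustrative values, not fitted ones.

      import numpy as np
      from scipy.stats import norm

      weights = np.array([0.5, 0.3, 0.2])      # posterior model probabilities (sum to 1)
      forecasts = np.array([10.0, 12.0, 9.0])  # bias-corrected member forecasts
      sigmas = np.array([1.0, 1.5, 0.8])       # member predictive spreads

      def bma_pdf(y):
          """BMA predictive density at y: sum_k w_k * N(y; f_k, s_k^2)."""
          return np.sum(weights * norm.pdf(y, loc=forecasts, scale=sigmas))

      print(bma_pdf(10.5))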

  5. Global carbon sequestration in tidal, saline wetland soils

    USGS Publications Warehouse

    Chmura, G.L.; Anisfeld, S.C.; Cahoon, D.R.; Lynch, J.C.

    2003-01-01

    Wetlands represent the largest component of the terrestrial biological carbon pool and thus play an important role in global carbon cycles. Most global carbon budgets, however, have focused on dry land ecosystems that extend over large areas and have not accounted for the many small, scattered carbon-storing ecosystems such as tidal saline wetlands. We compiled data for 154 sites in mangroves and salt marshes from the western and eastern Atlantic and Pacific coasts, as well as the Indian Ocean, Mediterranean Ocean, and Gulf of Mexico. The set of sites spans a latitudinal range from 22.4°S in the Indian Ocean to 55.5°N in the northeastern Atlantic. The average soil carbon density of mangrove swamps (0.055 ± 0.004 g cm-3) is significantly higher than the salt marsh average (0.039 ± 0.003 g cm-3). Soil carbon density in mangrove swamps and Spartina patens marshes declines with increasing average annual temperature, probably due to increased decay rates at higher temperatures. In contrast, carbon sequestration rates were not significantly different between mangrove swamps and salt marshes. Variability in sediment accumulation rates within marshes is a major control of carbon sequestration rates, masking any relationship with climatic parameters. Globally, these combined wetlands store at least 44.6 Tg C yr-1 and probably more, as detailed areal inventories are not available for salt marshes in China and South America. Much attention has been given to the role of freshwater wetlands, particularly northern peatlands, as carbon sinks. In contrast to peatlands, salt marshes and mangroves release negligible amounts of greenhouse gases and store more carbon per unit area. Copyright 2003 by the American Geophysical Union.

  6. Averaged kick maps: less noise, more signal…and probably less bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor

    2009-09-01

    Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σA) weighted. Analysis shows that they are comparable and correspond better to the final model than σA and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
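
    The AK recipe itself - perturb the model, recompute the map, average - is a few lines of code. In this conceptual sketch the crystallographic map calculation (structure factors plus FFT) is replaced by a hypothetical toy map_from_coords that paints one Gaussian blob per 1-D coordinate; everything here is illustrative, not the published implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      grid = np.linspace(0.0, 10.0, 200)

      def map_from_coords(coords, width=0.5):
          # Toy "density map": one Gaussian blob per (1-D) atomic coordinate.
          return np.exp(-(grid[None, :] - coords[:, None]) ** 2
                        / (2.0 * width ** 2)).sum(axis=0)

      def averaged_kick_map(coords, n_maps=50, max_shift=0.3):
          # Average n_maps maps, each from coordinates given uniform random kicks.
          kicks = rng.uniform(-max_shift, max_shift, size=(n_maps,) + coords.shape)
          return np.mean([map_from_coords(coords + k) for k in kicks], axis=0)

      atoms = np.array([2.0, 5.0, 5.8])
      ak_map = averaged_kick_map(atoms)
      print(ak_map.shape)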

  7. Modelling the Probability of Landslides Impacting Road Networks

    NASA Astrophysics Data System (ADS)

    Taylor, F. E.; Malamud, B. D.

    2012-04-01

    During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in the existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen in a 'T'-shaped configuration: one road of 1 x 4000 cells (5 m x 20 km) joined by another road of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, NBL, ranges from 0 to 7. The average blockage area for the 500 iterations (ĀBL) is about 3000 m2, which closely matches the value of ĀL for the triggered landslide inventories. We further find that over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(NBL) as a function of NBL, follows reasonably well a three-parameter inverse-gamma probability density function with an exponential rollover (i.e., the most frequent value) at NBL = 1.3. In this paper we have begun to calculate the probability of a given number of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid in both long-term and disaster management for road networks by allowing probabilistic assessment of potential road network damage during landslide triggering event scenarios of different magnitudes.
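
    A single iteration of this kind of Monte Carlo experiment can be sketched briefly. The inverse-gamma parameters below are back-of-envelope choices reproducing the stated -2.4 tail and 400 m2 rollover, not the paper's fitted values, and the 'T' network is simplified to one straight road.

      import numpy as np
      from scipy.stats import invgamma

      rng = np.random.default_rng(1)
      shape = 1.4                       # tail decays as A^-(shape+1) = A^-2.4
      scale = 400.0 * (shape + 1.0)     # puts the pdf mode (rollover) at 400 m^2
      L = 20000.0                       # domain side (m); 20 km x 20 km = 400 km^2

      areas = invgamma.rvs(shape, scale=scale, size=400, random_state=rng)
      half_width = np.sqrt(areas) / 2.0            # model each landslide as a square
      cx = rng.uniform(0.0, L, size=400)           # landslide centre x-coordinates
      blocks = np.abs(cx - L / 2.0) <= half_width  # squares straddling the road x = L/2
      print("road blockages in this iteration:", int(blocks.sum()))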

  8. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
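
    The quoted transform pair can be checked numerically. Under the assumption of an isotropic two-dimensional force whose magnitude is exponentially distributed, P(f) = exp(-f), the Cartesian component has density K0(|fx|)/π, with K0 the modified Bessel function of the second kind; the sampling experiment below verifies the correspondence.

      import numpy as np
      from scipy.special import k0

      rng = np.random.default_rng(2)
      f = rng.exponential(size=1_000_000)               # force magnitudes, P(f)=exp(-f)
      theta = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
      fx = f * np.cos(theta)                            # Cartesian component

      hist, edges = np.histogram(fx, bins=np.linspace(-4, 4, 81), density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      # Compare the empirical density with K_0(|fx|)/pi near the origin.
      print(np.c_[centers[40:44], hist[40:44], k0(centers[40:44]) / np.pi])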

  9. Reducing energy intake and energy density for a sustainable diet: a study based on self-selected diets in French adults.

    PubMed

    Masset, Gabriel; Vieux, Florent; Verger, Eric Olivier; Soler, Louis-Georges; Touazi, Djilali; Darmon, Nicole

    2014-06-01

    Studies on theoretical diets are not sufficient to implement sustainable diets in practice because of unknown cultural acceptability. In contrast, self-selected diets can be considered culturally acceptable. The objective was to identify the most sustainable diets consumed by people in everyday life. The diet-related greenhouse gas emissions (GHGE) for self-selected diets of 1918 adults participating in the cross-sectional French national dietary survey Individual and National Survey on Food Consumption (INCA2) were estimated. "Lower-Carbon," "Higher-Quality," and "More Sustainable" diets were defined as having GHGE lower than the overall median value, a probability of adequate nutrition intake (PANDiet) score (a measure of the overall nutritional adequacy of a diet) higher than the overall median value, and a combination of both criteria, respectively. Diet cost, as a proxy for affordability, and energy density were also assessed. More Sustainable diets were consumed by 23% of men and 20% of women, and their GHGE values were 19% and 17% lower than the population average (mean) value, respectively. In comparison with the average value, Lower-Carbon diets achieved a 20% GHGE reduction and lower cost, but they were not sustainable because they had a lower PANDiet score. Higher-Quality diets were not sustainable because of their above-average GHGE and cost. More Sustainable diets had an above-average PANDiet score and a below-average energy density, cost, GHGE, and energy content; the energy share of plant-based products was increased by 20% and 15% compared with the average for men and women, respectively. A strength of this study was that most of the dimensions for "sustainable diets" were considered, ie, not only nutritional quality and GHGE but also affordability and cultural acceptability. A reduction in diet-related GHGE by 20% while maintaining high nutritional quality seems realistic. This goal could be achieved at no extra cost by reducing energy intake and energy density and increasing the share of plant-based products. © 2014 American Society for Nutrition.

  10. Wind energy potential assessment of Cameroon's coastal regions for the installation of an onshore wind farm.

    PubMed

    Arreyndip, Nkongho Ayuketang; Joseph, Ebobenow; David, Afungchui

    2016-11-01

    For the future installation of a wind farm in Cameroon, the wind energy potentials of three of Cameroon's coastal cities (Kribi, Douala and Limbe) are assessed using NASA average monthly wind data for 31 years (1983-2013) and compared through Weibull statistics. The Weibull parameters are estimated by the method of maximum likelihood; the mean power densities, the maximum-energy-carrying wind speeds and the most probable wind speeds are also calculated and compared over these three cities. Finally, the cumulative wind speed distributions over the wet and dry seasons are also analyzed. The results show that the shape and scale parameters for Kribi, Douala and Limbe are 2.9 and 2.8, 3.9 and 1.8, and 3.08 and 2.58, respectively. The mean power densities through Weibull analysis for Kribi, Douala and Limbe are 33.7 W/m2, 8.0 W/m2 and 25.42 W/m2, respectively. Kribi's most probable wind speed and maximum-energy-carrying wind speed were found to be 2.42 m/s and 3.35 m/s, compared with 2.27 m/s and 3.03 m/s for Limbe and 1.67 m/s and 2.0 m/s for Douala. Analysis of the wind speed, and hence power, distribution over the wet and dry seasons shows that in the wet season, August is the windiest month for Douala and Limbe while September is the windiest month for Kribi; in the dry season, March is the windiest month for Douala and Limbe while February is the windiest month for Kribi. In terms of mean power density, most probable wind speed and maximum-energy-carrying wind speed, Kribi proves to be the best site for the installation of a wind farm. Generally, the wind speeds at all three locations are quite low; the average wind speeds of all three studied locations fall below 4.0 m/s, which is far below the cut-in wind speed of many modern wind turbines. We therefore recommend the use of low cut-in-speed wind turbines such as the Savonius for stand-alone, low-energy needs.
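
    The Weibull quantities quoted here follow from standard closed forms, sketched below for Kribi's reported parameters. The two characteristic speeds reproduce the reported Kribi values; the mean power density depends on the assumed air density (our choice of 1.225 kg/m3) and so need not match the paper's figure exactly.

      import numpy as np
      from scipy.special import gamma

      k, c = 2.9, 2.8          # Weibull shape and scale (m/s) for Kribi
      rho = 1.225              # assumed air density (kg/m^3)

      mean_power_density = 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)   # (1/2) rho E[v^3], W/m^2
      v_most_probable = c * ((k - 1.0) / k) ** (1.0 / k)             # mode of the Weibull pdf
      v_max_energy = c * ((k + 2.0) / k) ** (1.0 / k)                # peak of v^3 * pdf
      print(mean_power_density, v_most_probable, v_max_energy)       # speeds: ~2.42, ~3.35 m/s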

  11. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
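
    For contrast with the paper's proposal, a minimal example of the classical baseline it discusses: a Kolmogorov-Smirnov test of i.i.d. draws against a fully specified density (here the standard normal). Drawing from a shifted distribution illustrates a detected discrepancy; the shift is an arbitrary choice for the demo.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      draws = rng.normal(loc=0.3, scale=1.0, size=500)   # not standard normal
      stat, pvalue = stats.kstest(draws, "norm")         # compare with N(0,1) CDF
      print(stat, pvalue)                                # small p-value -> rejected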

  12. Intermittent burst of a super rogue wave in the breathing multi-soliton regime of an anomalous fiber ring cavity.

    PubMed

    Lee, Seungjong; Park, Kyoungyoon; Kim, Hyuntai; Vazquez-Zuniga, Luis Alonso; Kim, Jinseob; Jeong, Yoonchan

    2018-04-30

    We report the intermittent burst of a super rogue wave in the multi-soliton (MS) regime of an anomalous-dispersion fiber ring cavity. We exploit the spatio-temporal measurement technique to log and capture the shot-to-shot wave dynamics of various pulse events in the cavity, and obtain the corresponding intensity probability density function, which eventually unveils the inherent nature of the extreme events encompassed therein. In the breathing MS regime, a specific MS regime with heavy soliton population, the natural probability of pulse interaction among solitons and dispersive waves exponentially increases owing to the extraordinarily high soliton population density. The combination of the probabilistically started soliton interactions and the subsequently accompanying dispersive waves in their vicinity triggers an avalanche of extreme events with even higher intensities, culminating in a burst of a super rogue wave nearly ten times stronger than the average solitons observed in the cavity. Without any cavity modification or control, the process naturally and intermittently recurs on a time scale of the order of ten seconds.

  13. The computer simulation of automobile use patterns for defining battery requirements for electric cars

    NASA Technical Reports Server (NTRS)

    Schwartz, H.-J.

    1976-01-01

    The modeling process of a complex system, based on the calculation and optimization of the system parameters, is complicated in that some parameters can be expressed only as probability distributions. In the present paper, a Monte Carlo technique was used to determine the daily range requirements of an electric road vehicle in the United States from probability distributions of trip lengths, frequencies, and average annual mileage data. The analysis shows that a daily range of 82 miles meets 95% of car-owner requirements at all times, with the exception of long vacation trips. Further, it is shown that the requirement of a daily range of 82 miles can be met by an intermediate-level battery technology characterized by an energy density of 30 to 50 watt-hours per pound. Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. These results imply that long-term research goals for battery systems should be focused on lower cost and longer service life, rather than on higher energy densities.
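
    The Monte Carlo idea translates directly into code: sample daily mileage from trip-frequency and trip-length distributions and read off the daily range covering 95% of days. Both distributions below are illustrative stand-ins, not the 1976 survey data, so the resulting percentile will not reproduce the 82-mile figure.

      import numpy as np

      rng = np.random.default_rng(4)
      days = 20_000
      trips = rng.poisson(3.0, size=days)                    # trips per day (assumed)
      daily_miles = np.array([rng.lognormal(mean=1.5, sigma=0.9, size=n).sum()
                              for n in trips])               # trip lengths in miles (assumed)
      print("95th-percentile daily range: %.0f miles" % np.percentile(daily_miles, 95))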

  14. Density functional study for crystalline structures and electronic properties of Si1-xSnx binary alloys

    NASA Astrophysics Data System (ADS)

    Nagae, Yuki; Kurosawa, Masashi; Shibayama, Shigehisa; Araidai, Masaaki; Sakashita, Mitsuo; Nakatsuka, Osamu; Shiraishi, Kenji; Zaima, Shigeaki

    2016-08-01

    We have carried out density functional theory (DFT) calculations for the Si1-xSnx alloy and investigated the effect of the displacement of Si and Sn atoms with strain relaxation on the lattice constant and E-k dispersion. We calculated the formation probabilities for all atomic configurations of Si1-xSnx according to the Boltzmann distribution. The average lattice constant and E-k dispersion were weighted by the formation probability of each configuration of Si1-xSnx. We estimated the displacement of Si and Sn atoms from the initial tetrahedral site in the Si1-xSnx unit cell considering structural relaxation under hydrostatic pressure, and we found that the breaking of the degenerate electronic levels at the valence band edge could be caused by the breaking of the tetrahedral symmetry. We also calculated the E-k dispersion of the Si1-xSnx alloy by the DFT+U method and found that a Sn content above 50% would be required for the indirect-direct transition.
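
    The Boltzmann weighting described here is a one-line average once the configuration energies are known. In this sketch the temperature, formation energies, and per-configuration lattice constants are placeholders, not the paper's DFT results.

      import numpy as np

      kT = 0.0259                                   # eV, room temperature (assumed)
      E = np.array([0.00, 0.05, 0.12])              # formation energies (eV), illustrative
      a = np.array([5.52, 5.55, 5.60])              # lattice constants (Angstrom), illustrative

      w = np.exp(-E / kT)
      w /= w.sum()                                  # normalized Boltzmann probabilities
      print("weighted average lattice constant:", np.dot(w, a))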

  15. Synthesis and analysis of discriminators under influence of broadband non-Gaussian noise

    NASA Astrophysics Data System (ADS)

    Artyushenko, V. M.; Volovach, V. I.

    2018-01-01

    We considered the problems of the synthesis and analysis of discriminators, when the useful signal is exposed to non-Gaussian additive broadband noise. It is shown that in this case, the discriminator of the tracking meter should contain the nonlinear transformation unit, the characteristics of which are determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian broadband noise and mismatch errors. The parameters of the discriminatory and phase characteristics of the discriminators working under the above conditions are obtained. It is shown that the efficiency of non-linear processing depends on the ratio of power of FM noise to the power of Gaussian noise. The analysis of the information loss of signal transformation caused by the linear section of discriminatory characteristics of the unit of nonlinear transformations of the discriminator is carried out. It is shown that the average slope of the nonlinear transformation characteristic is determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian noise and mismatch errors.
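
    The driving quantity here is the Fisher information of the noise-plus-mismatch density, I = ∫ (p'(x))²/p(x) dx. A quick numerical evaluation for a Gaussian stand-in (our choice; the paper's densities are non-Gaussian), where the closed form 1/σ² checks the numerics.

      import numpy as np

      sigma = 2.0
      x = np.linspace(-12.0, 12.0, 20001)
      p = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
      dp = np.gradient(p, x)                       # numerical derivative p'(x)
      fisher = np.sum(dp**2 / p) * (x[1] - x[0])   # Riemann sum for the integral
      print(fisher, 1.0 / sigma**2)                # both ~0.25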

  16. Statistical Short-Range Guidance for Peak Wind Speed Forecasts at Edwards Air Force Base, CA

    NASA Technical Reports Server (NTRS)

    Dreher, Joseph; Crawford, Winifred; Lafosse, Richard; Hoeth, Brian; Burns, Kerry

    2008-01-01

    The peak winds near the surface are an important forecast element for Space Shuttle landings. As defined in the Shuttle Flight Rules (FRs), there are peak wind thresholds that cannot be exceeded in order to ensure the safety of the shuttle during landing operations. The National Weather Service Spaceflight Meteorology Group (SMG) is responsible for weather forecasts for all shuttle landings. They indicate peak winds are a challenging parameter to forecast. To alleviate the difficulty in making such wind forecasts, the Applied Meteorology Unit (AMU) developed a personal computer-based graphical user interface (GUI) for displaying peak wind climatology and probabilities of exceeding peak-wind thresholds for the Shuttle Landing Facility (SLF) at Kennedy Space Center. However, the shuttle must land at Edwards Air Force Base (EAFB) in southern California when weather conditions at Kennedy Space Center in Florida are not acceptable, so SMG forecasters requested that a similar tool be developed for EAFB. Marshall Space Flight Center (MSFC) personnel archived and performed quality control of 2-minute average and 10-minute peak wind speeds at each tower adjacent to the main runway at EAFB from 1997-2004. They calculated wind climatologies and probabilities of peak wind occurrence based on the average speed. The climatologies were calculated for each tower and month, and were stratified by hour, direction, and direction/hour. For the probabilities of peak wind occurrence, MSFC calculated empirical and modeled probabilities of meeting or exceeding specific 10-minute peak wind speeds using probability density functions. The AMU obtained and reformatted the data into Microsoft Excel PivotTables, which allow users to display different values with point-click-drag techniques. The GUI was then created from the PivotTables using Visual Basic for Applications code. The GUI is run through a macro within Microsoft Excel and allows forecasters to quickly display and interpret peak wind climatology and likelihoods in a fast-paced operational environment. A summary of how the peak wind climatologies and probabilities were created and an overview of the GUI will be presented.

  17. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
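
    A skeletal illustration of the model form named above: a logistic equation returning the probability of obtaining regeneration at a specified density. The coefficients and the predictor are placeholders, not the fitted values from the study.

      import numpy as np

      def p_regen(x, b0=-2.0, b1=0.8):
          # Logistic model: P(regeneration at the specified density | predictor x).
          return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

      print(p_regen(np.array([0.0, 2.5, 5.0])))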

  18. The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model

    NASA Astrophysics Data System (ADS)

    Verkley, Wim; Severijns, Camiel

    2014-05-01

    Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2] which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz, Eur. Phys. J. B, 87:7, http://dx.doi.org/10.1140/epjb/e2013-40681-2 (open access).
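
    The benchmark system in [1] (often called the Lorenz-96 model) and the average-energy statistic used as the entropy-maximization constraint can be reproduced in a few lines. The grid size, forcing, and time step below are common textbook choices, assumed here rather than taken from [3].

      import numpy as np

      # Lorenz-96: dx_j/dt = (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F, periodic in j.
      J, F, dt, steps = 40, 8.0, 0.01, 50_000

      def rhs(x):
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

      rng = np.random.default_rng(5)
      x = F + 0.01 * rng.standard_normal(J)      # small perturbation of the fixed point
      energies = []
      for n in range(steps):
          k1 = rhs(x)
          k2 = rhs(x + 0.5 * dt * k1)            # midpoint (2nd-order Runge-Kutta) step
          x = x + dt * k2
          if n > 10_000:                         # discard the transient
              energies.append(0.5 * np.dot(x, x))
      print("average total energy:", np.mean(energies))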

  19. Predicting debris

    NASA Technical Reports Server (NTRS)

    Kessler, Donald J.

    1988-01-01

    The probable amount, sizes, and relative velocities of debris are discussed, giving examples of the damage caused by debris, and focusing on the use of mathematical models to forecast the debris environment and solar activity now and in the future. Most debris is within 2,000 km of the earth's surface. The average velocity of spacecraft-debris collisions varies from 9 km/sec at 30 degrees of inclination to 13 km/sec near polar orbits. Mathematical models predict a 5 percent per year increase in the large-fragment population, producing a small-fragment population increase of 10 percent per year until the year 2060, the time of critical density. A 10 percent per year increase in the large-fragment population would cause the critical density to be reached around 2025.

  20. Stochastic mechanics of loose boundary particle transport in turbulent flow

    NASA Astrophysics Data System (ADS)

    Dey, Subhasish; Ali, Sk Zeeshan

    2017-05-01

    In a turbulent wall shear flow, we explore, for the first time, the stochastic mechanics of loose boundary particle transport, having variable particle protrusions due to various cohesionless particle packing densities. The mean transport probabilities in contact and detachment modes are obtained. The mean transport probabilities in these modes as a function of Shields number (nondimensional fluid induced shear stress at the boundary) for different relative particle sizes (ratio of boundary roughness height to target particle diameter) and shear Reynolds numbers (ratio of fluid inertia to viscous damping) are presented. The transport probability in contact mode increases with an increase in Shields number attaining a peak and then decreases, while that in detachment mode increases monotonically. For the hydraulically transitional and rough flow regimes, the transport probability curves in contact mode for a given relative particle size of greater than or equal to unity attain their peaks corresponding to the averaged critical Shields numbers, from where the transport probability curves in detachment mode initiate. At an inception of particle transport, the mean probabilities in both the modes increase feebly with an increase in shear Reynolds number. Further, for a given particle size, the mean probability in contact mode increases with a decrease in critical Shields number attaining a critical value and then increases. However, the mean probability in detachment mode increases with a decrease in critical Shields number.

  1. Roosevelt elk density in old-growth forests of Olympic National Park

    USGS Publications Warehouse

    Houston, D.B.; Moorhead, Bruce B.; Olson, R.W.

    1987-01-01

    We explored the feasibility of censusing Roosevelt elk from a helicopter in the dense old-growth forests of Olympic National Park, WA. Mean observed densities ranged from 8.0 to 11.6 elk/km2, with coefficients of variation averaging 19.9 percent. A provisional sightability factor of 74 percent suggested that actual mean densities ranged from 10.8 to 16.0 elk/km2. We conclude that estimates of elk density probably could be refined, but not without a cost and level of disturbance in the park that seem unwarranted at present. The effort required to conduct the 18 counts made during 1985-86 was substantial. For almost every successful count an unsuccessful attempt was made, including five flights aborted when counting conditions deteriorated. Actual counting time for the successful flights was 15.7 and 16.2 hours in 1985 and 1986, respectively. Additional flight time for traveling to and from the census zones, refueling, and aborted attempts added 12.2 and 10.8 hours for the respective years.

  2. Electron beam emission from a diamond-amplifier cathode.

    PubMed

    Chang, Xiangyun; Wu, Qiong; Ben-Zvi, Ilan; Burrill, Andrew; Kewisch, Jorg; Rao, Triveni; Smedley, John; Wang, Erdong; Muller, Erik M; Busby, Richard; Dimitrov, Dimitre

    2010-10-15

    The diamond amplifier (DA) is a new device for generating high-current, high-brightness electron beams. Our transmission-mode tests show that, with single-crystal, high-purity diamonds, the peak current density is greater than 400 mA/mm², while the average density can be more than 100 mA/mm². The gain of the primary electrons easily exceeds 200, and is independent of their density within the practical range of DA applications. We also observed electron emission: the maximum emission gain measured was 40, and the bunch charge was 50 pC/0.5 mm². There was a 35% probability of the emission of an electron from the hydrogenated surface in our tests. We identified a mechanism of slow charging of the diamond, due to thermal ionization of surface states, that cancels the applied field within it. We also demonstrated that a hydrogenated diamond is extremely robust.

  3. Simulations of Turbulent Momentum and Scalar Transport in Non-Reacting Confined Swirling Coaxial Jets

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey; Moder, Jeffrey P.

    2015-01-01

    This paper presents the numerical simulations of confined three-dimensional coaxial water jets. The objectives are to validate the newly proposed nonlinear turbulence models of momentum and scalar transport, and to evaluate the newly introduced scalar APDF and DWFDF equation along with its Eulerian implementation in the National Combustion Code (NCC). Simulations conducted include the steady RANS, the unsteady RANS (URANS), and the time-filtered Navier-Stokes (TFNS); both without and with invoking the APDF or DWFDF equation. When the APDF (ensemble averaged probability density function) or DWFDF (density weighted filtered density function) equation is invoked, the simulations are of a hybrid nature, i.e., the transport equations of energy and species are replaced by the APDF or DWFDF equation. Results of simulations are compared with the available experimental data. Some positive impacts of the nonlinear turbulence models and the Eulerian scalar APDF and DWFDF approach are observed.

  4. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
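
    The positivity problem is easy to exhibit numerically: a Gram-Charlier series truncated after the skewness term dips below zero and is therefore not a valid density. The skewness value in the sketch is an arbitrary example.

      import numpy as np
      from numpy.polynomial.hermite_e import HermiteE
      from scipy.stats import norm

      skew = 1.0
      He3 = HermiteE([0, 0, 0, 1])                 # probabilists' Hermite He_3(x) = x^3 - 3x

      x = np.linspace(-5, 5, 1001)
      approx = norm.pdf(x) * (1.0 + (skew / 6.0) * He3(x))   # truncated Gram-Charlier A series
      print("minimum of truncated series:", approx.min())    # negative -> not a pdf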

  5. Maximum number of habitable planets at the time of Earth's origin: new hints for panspermia?

    PubMed

    von Bloh, Werner; Franck, Siegfried; Bounama, Christine; Schellnhuber, Hans-Joachim

    2003-04-01

    New discoveries have fuelled the ongoing discussion of panspermia, i.e. the transport of life from one planet to another within the solar system (interplanetary panspermia) or even between different planetary systems (interstellar panspermia). The main factor for the probability of interstellar panspermia is the average density of stellar systems containing habitable planets. The combination of recent results for the formation rate of Earth-like planets with our estimations of extrasolar habitable zones allows us to determine the number of habitable planets in the Milky Way over cosmological time scales. We find that there was a maximum number of habitable planets around the time of Earth's origin. If at all, interstellar panspermia was most probable at that time and may have kick-started life on our planet.

  6. Bilocal current densities and mean trajectories in a Young interferometer with two Gaussian slits and two detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Withers, L. P., E-mail: lpwithers@mitre.org; Narducci, F. A., E-mail: francesco.narducci@navy.mil

    2015-06-15

    The recent single-photon double-slit experiment of Steinberg et al., based on a weak measurement method proposed by Wiseman, showed that, by encoding the photon’s transverse momentum behind the slits into its polarization state, the momentum profile can subsequently be measured on average, from a difference of the separated fringe intensities for the two circular polarization components. They then integrated the measured average velocity field, to obtain the average trajectories of the photons en route to the detector array. In this paper, we propose a modification of their experiment, to demonstrate that the average particle velocities and trajectories change when the mode of detection changes. The proposed experiment replaces a single detector by a pair of detectors with a given spacing between them. The pair of detectors is configured so that it is impossible to distinguish which detector received the particle. The pair of detectors is then analogous to the simple pair of slits, in that it is impossible to distinguish which slit the particle passed through. To establish the paradoxical outcome of the modified experiment, the theory and explicit three-dimensional formulas are developed for the bilocal probability and current densities, and for the average velocity field and trajectories as the particle wavefunction propagates in the volume of space behind the Gaussian slits. Examples of these predicted results are plotted. Implementation details of the proposed experiment are discussed.

  7. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated. © 2000 The American Astronomical Society.

  8. How many tigers Panthera tigris are there in Huai Kha Khaeng Wildlife Sanctuary, Thailand? An estimate using photographic capture-recapture sampling

    USGS Publications Warehouse

    Simcharoen, S.; Pattanavibool, A.; Karanth, K.U.; Nichols, J.D.; Kumar, N.S.

    2007-01-01

    We used capture-recapture analyses to estimate the density of a tiger Panthera tigris population in the tropical forests of Huai Kha Khaeng Wildlife Sanctuary, Thailand, from photographic capture histories of 15 distinct individuals. The closure test results (z = 0.39, P = 0.65) provided some evidence in support of the demographic closure assumption. Fit of eight plausible closed models to the data indicated more support for model Mh, which incorporates individual heterogeneity in capture probabilities. This model generated an average capture probability p̂ = 0.42 and an abundance estimate N̂ (SE) = 19 (9.65) tigers. The sampled area Â(W) (SE) = 477.2 (58.24) km2 yielded a density estimate D̂ (SE) = 3.98 (0.51) tigers per 100 km2. Huai Kha Khaeng Wildlife Sanctuary could therefore hold 113 tigers and the entire Western Forest Complex c. 720 tigers. Although based on field protocols that constrained us to use sub-optimal analyses, this estimated tiger density is comparable to tiger densities in Indian reserves that support moderate prey abundances. However, tiger densities in well-protected Indian reserves with high prey abundances are three times higher. If given adequate protection we believe that the Western Forest Complex of Thailand could potentially harbour >2,000 wild tigers, highlighting its importance for global tiger conservation. The monitoring approaches we recommend here would be useful for managing this tiger population.

  9. Exposure time of oral rabies vaccine baits relative to baiting density and raccoon population density.

    PubMed

    Blackwell, Bradley F; Seamans, Thomas W; White, Randolph J; Patton, Zachary J; Bush, Rachel M; Cepek, Jonathan D

    2004-04-01

    Oral rabies vaccination (ORV) baiting programs for control of raccoon (Procyon lotor) rabies in the USA have been conducted or are in progress in eight states east of the Mississippi River. However, data specific to the relationship between raccoon population density and the minimum density of baits necessary to significantly elevate rabies immunity are few. We used the 22-km2 US National Aeronautics and Space Administration Plum Brook Station (PBS) in Erie County, Ohio, USA, to evaluate the period of exposure for placebo vaccine baits placed at a density of 75 baits/km2 relative to raccoon population density. Our objectives were to 1) estimate raccoon population density within the fragmented forest, old-field, and industrial landscape at PBS; and 2) quantify the time that placebo Merial RABORAL V-RG vaccine baits were available to raccoons. From August through November 2002 we surveyed raccoon use of PBS along 19.3 km of paved-road transects by using a forward-looking infrared camera mounted inside a vehicle. We used Distance 3.5 software to calculate a probability-of-detection function by which we estimated raccoon population density from transect data. Estimated population density on PBS decreased from August (33.4 raccoons/km2) through November (13.6 raccoons/km2), yielding a monthly mean of 24.5 raccoons/km2. We also quantified exposure time for ORV baits placed by hand on five 1-km2 grids on PBS from September through October. An average 82.7% (SD = 4.6) of baits were removed within 1 wk of placement. Given raccoon population density, estimates of bait removal and sachet condition, and assuming 22.9% nontarget take, the baiting density of 75/km2 yielded an average of 3.3 baits consumed per raccoon with the sachet perforated.

  10. The role of the density gradient on intermittent cross-field transport events in a simple magnetized toroidal plasma

    NASA Astrophysics Data System (ADS)

    Theiler, C.; Diallo, A.; Fasoli, A.; Furno, I.; Labit, B.; Podestà, M.; Poli, F. M.; Ricci, P.

    2008-04-01

    Intermittent cross-field particle transport events (ITEs) are studied in the basic toroidal device TORPEX [TORoidal Plasma EXperiment, A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], with focus on the role of the density gradient. ITEs are due to the intermittent radial elongation of an interchange mode. The elongating positive wave crests can break apart and form blobs. This is not necessary, however, for plasma particles to be convected a considerable distance across the magnetic field lines. Conditionally sampled data reveal two different scenarios leading to ITEs. In the first case, the interchange mode grows radially from a slab-like density profile and leads to the ITE. A novel analysis technique reveals a monotonic dependence between the vertically averaged inverse radial density scale length and the probability for a subsequent ITE. In the second case, the mode is already observed before the start of the ITE. It does not elongate radially in a first stage, but at a later time. It is shown that this elongation is preceded by a steepening of the density profile as well.

  11. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
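
    A stripped-down version of the approach: fit a Gaussian density whose mean drifts linearly in time by maximum likelihood, then extrapolate the fitted density beyond the data. The synthetic series and the linear-drift parametric form are assumptions for this demo; the paper's framework is more general and accounts for parameter uncertainty.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      rng = np.random.default_rng(6)
      t = np.linspace(0.0, 1.0, 300)
      x = 0.5 + 2.0 * t + 0.3 * rng.standard_normal(t.size)   # drifting time series

      def negloglik(p):
          a, b, log_s = p
          return -np.sum(norm.logpdf(x, loc=a + b * t, scale=np.exp(log_s)))

      fit = minimize(negloglik, x0=np.array([0.0, 0.0, 0.0]))
      a, b, s = fit.x[0], fit.x[1], np.exp(fit.x[2])
      # Extrapolate the nonstationary density to a future time t = 1.5.
      print("forecast density at t=1.5: mean %.2f, sd %.2f" % (a + b * 1.5, s))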

  12. Invasion resistance arises in strongly interacting species-rich model competition communities.

    PubMed Central

    Case, T J

    1990-01-01

    I assemble stable multispecies Lotka-Volterra competition communities that differ in resident species number and average strength (and variance) of species interactions. These are then invaded with randomly constructed invaders drawn from the same distribution as the residents. The invasion success rate and the fate of the residents are determined as a function of community- and species-level properties. I show that the probability of colonization success for an invader decreases with community size and the average strength of competition (alpha). Communities composed of many strongly interacting species limit the invasion possibilities of most similar species. These communities, even for a superior invading competitor, set up a sort of "activation barrier" that repels invaders when they invade at low numbers. This "priority effect" for residents is not assumed a priori in my description of the individual population dynamics of these species; rather, it emerges because species-rich and strongly interacting species sets have alternative stable states that tend to disfavor species at low densities. These models point to community-level rather than invader-level properties as the strongest determinant of differences in invasion success. The probability of extinction for a resident species increases with community size, and the probability of successful colonization by the invader decreases. Thus an equilibrium community size results wherein the probability of a resident species' extinction just balances the probability of an invader's addition. Given the distribution of alpha it is now possible to predict the equilibrium species number. The results provide a logical framework for an island-biogeographic theory in which species turnover is low even in the face of persistent invasions and for the protection of fragile native species from invading exotics. PMID:11607132
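
    A toy version of the assembly-and-invasion experiment: residents sit at a feasible Lotka-Volterra competition equilibrium, and an invader introduced at vanishingly low density succeeds when its per-capita growth rate there is positive. Community size and interaction strengths are illustrative draws, not the paper's ensembles.

      import numpy as np

      rng = np.random.default_rng(7)
      S = 10                                        # resident species
      alpha = rng.uniform(0.2, 0.8, size=(S, S))    # competition coefficients
      np.fill_diagonal(alpha, 1.0)                  # self-limitation; carrying capacity K = 1

      # Interior equilibrium of dN_i/dt = r N_i (1 - sum_j alpha_ij N_j):
      # if any component is negative, the draw is infeasible and would be
      # redrawn in an assembly loop.
      N = np.linalg.solve(alpha, np.ones(S))

      inv_alpha = rng.uniform(0.2, 0.8, size=S)     # invader's competition with residents
      invasion_growth = 1.0 - np.dot(inv_alpha, N)  # per-capita growth rate when rare
      print("feasible community:", bool(N.min() > 0),
            "invader grows when rare:", bool(invasion_growth > 0))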

  13. Shear coaxial injector atomization phenomena for combusting and non-combusting conditions

    NASA Technical Reports Server (NTRS)

    Pal, S.; Moser, M. D.; Ryan, H. M.; Foust, M. J.; Santoro, R. J.

    1992-01-01

    Measurements of LOX drop size and velocity in a uni-element liquid propellant rocket chamber are presented. The use of the Phase Doppler Particle Analyzer in obtaining temporally-averaged probability density functions of drop size in a harsh rocket environment has been demonstrated. Complementary measurements of drop size/velocity for simulants under cold flow conditions are also presented. The drop size/velocity measurements made for combusting and cold flow conditions are compared, and the results indicate that there are significant differences in the two flowfields.

  14. Crowd evacuation model based on bacterial foraging algorithm

    NASA Astrophysics Data System (ADS)

    Shibiao, Mu; Zhijun, Chen

    To understand crowd evacuation, a model based on a bacterial foraging algorithm (BFA) is proposed in this paper. Considering dynamic and static factors, the probability of pedestrian movement is established using cellular automata. In addition, given walking and queue times, a target optimization function is built. At the same time, a BFA is used to optimize the objective function. Finally, through real and simulation experiments, the relationship between the parameters of evacuation time, exit width, pedestrian density, and average evacuation speed is analyzed. The results show that the model can effectively describe a real evacuation.

  15. Stochastic ontogenetic growth model

    NASA Astrophysics Data System (ADS)

    West, B. J.; West, D.

    2012-02-01

    An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second law of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model and the asymptotic steady-state distribution of the TBM is fit to data and shown to be inverse power law.

  16. Analysis of data from NASA B-57B gust gradient program

    NASA Technical Reports Server (NTRS)

    Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.

    1985-01-01

    Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982, included the calculation of average turbulence parameters, integral length scales, probability density functions, single-point autocorrelation coefficients, two-point autocorrelation coefficients, normalized autospectra, normalized two-point autospectra, and two-point cross spectra for gust velocities. The single-point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two-point spatial turbulence correlations.

  17. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  18. The Renner effect in triatomic molecules with application to CH2+, MgNC and NH2.

    PubMed

    Jensen, Per; Odaka, Tina Erica; Kraemer, W P; Hirano, Tsuneo; Bunker, P R

    2002-03-01

    We have developed a computational procedure, based on the variational method, for the calculation of the rovibronic energies of a triatomic molecule in an electronic state that becomes degenerate at the linear nuclear configuration. In such an electronic state the coupling caused by the electronic orbital angular momentum is very significant, and it is called the Renner effect. We include it, and the effect of spin-orbit coupling, in our program. We have developed the procedure to the point where spectral line intensities can be calculated, so that absorption and emission spectra can be simulated. In order to gain insight into the nature of the eigenfunctions, we have introduced and calculated the overall bending probability density function f(ρ) of the states. By projecting the eigenfunctions onto the Born-Oppenheimer basis, we have determined the probability density functions f−(ρ) and f+(ρ) associated with the individual Born-Oppenheimer states φ−elec and φ+elec. At a given temperature the Boltzmann-averaged value of f(ρ) over all the eigenstates gives the bending probability distribution function F(ρ), and this can be related to the result of a Coulomb Explosion Imaging (CEI) experiment. We review our work and apply it to the molecules CH2+, MgNC and NH2, all of which are of astrophysical interest.

  19. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  20. Exploring the uncertainty in attributing sediment contributions in fingerprinting studies due to uncertainty in determining element concentrations in source areas.

    NASA Astrophysics Data System (ADS)

    Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David

    2016-04-01

    One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model, due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and the sampling density. Researchers have limited control over these factors, and sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations calibrated from parameter values found in the literature, with the purpose of providing insight into the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution is known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual catchment was constructed, composed of three sub-catchments, each 500 x 500 m in size. We assumed that there was no selectivity in sediment detachment or transport. A numerical exercise was performed considering these variables: 1) variability of element concentration: three levels, with CVs of 20 %, 50 % and 80 %; 2) sampling density: 10, 25 and 50 "samples" per sub-catchment and element; and 3) number of elements included in the mixing model: two (determined) and five (overdetermined). This resulted in a total of 18 (3 x 3 x 2) possible combinations. The five fingerprinting elements considered in the study were C, N, 40K, Al and Pavail, and their average values, taken from the literature, were: sub-catchment 1: 4.0 %, 0.35 %, 0.50 ppm, 5.0 ppm, 1.42 ppm, respectively; sub-catchment 2: 2.0 %, 0.18 %, 0.20 ppm, 10.0 ppm, 0.20 ppm, respectively; and sub-catchment 3: 1.0 %, 0.06 %, 1.0 ppm, 16.0 ppm, 7.8 ppm, respectively. For each sub-catchment, three maps of the spatial distribution of each element were generated using the random generator of Mejia and Rodriguez-Iturbe (1974), as described in Freeze (1980), using the average value and the three different CVs defined above. Each map for each source area and property was generated for a 100 x 100 square grid, each grid cell being 5 m x 5 m. Maps were randomly generated for each property and source area. In doing so, we did not consider the possibility of cross correlation among properties. Spatial autocorrelation was assumed to be weak.
The reason for generating the maps was to create a "virtual" situation where all the element concentration values at each point are known. Simultaneously, we arbitrarily set the percentage of sediment coming from each sub-catchment. These values were 30 %, 10 % and 60 % for sub-catchments 1, 2 and 3, respectively. Using these values, we determined the element concentrations in the sediment. The exercise consisted of applying different sampling strategies in this virtual environment to determine an average value for each of the different maps of element concentration and sub-catchment, under different sampling densities: 200 different average values for the "high" sampling density (average of 50 samples); 400 different average values for the "medium" sampling density (average of 25 samples); and 1,000 different average values for the "low" sampling density (average of 10 samples). Each of these combinations of possible element concentrations in the source areas was solved against the sediment concentrations determined from the "true" solution, using limSolve (Soetaert et al., 2014) in R. The sediment source solutions found for the different situations and values were analyzed in order to: 1) evaluate the uncertainty in the sediment source attribution; and 2) explore strategies to detect the most probable solutions that might lead to improved methods for constructing the most robust mixing models. Preliminary results on these will be presented and discussed in this communication. Key words: sediment, fingerprinting, uncertainty, variability, mixing model. References: Collins, A.L., Zhang, Y., McChesney, D., Walling, D.E., Haley, S.M., Smith, P. 2012. Sediment source tracing in a lowland agricultural catchment in southern England using a modified procedure combining statistical analysis and numerical modelling. Science of the Total Environment 414: 301-317. Freeze, R.A. 1980. A stochastic-conceptual analysis of rainfall-runoff processes on a hillslope. Water Resources Research 16: 391-408.
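
    A minimal sketch of this kind of Monte Carlo exercise is given below. Python and plain least squares stand in for the study's R/limSolve workflow; the normal sampling of the source means, the weighting of the sum-to-one constraint, and all variable names are illustrative assumptions, while the source means and "true" proportions are the values quoted above.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Mean element concentrations of the three sub-catchments (C %, N %,
    # 40K ppm, Al ppm, Pavail ppm), as quoted in the abstract.
    means = np.array([
        [4.0, 0.35, 0.50,  5.0, 1.42],   # sub-catchment 1
        [2.0, 0.18, 0.20, 10.0, 0.20],   # sub-catchment 2
        [1.0, 0.06, 1.00, 16.0, 7.80],   # sub-catchment 3
    ])
    true_props = np.array([0.30, 0.10, 0.60])   # "true" sediment contributions
    sediment = true_props @ means               # mixture concentrations

    cv, n_samples, n_runs = 0.5, 10, 1000       # 50 % CV, "low" sampling density
    estimates = np.empty((n_runs, 3))
    for i in range(n_runs):
        # Sample the estimated mean concentration of each source element from
        # the sampling distribution of a mean of n_samples observations.
        src = rng.normal(means, cv * means / np.sqrt(n_samples))
        # Solve the overdetermined mixing model; the sum-to-one constraint is
        # appended as a heavily weighted extra equation (concentrations are
        # used unweighted here for simplicity).
        A = np.vstack([src.T, 1e3 * np.ones(3)])
        b = np.concatenate([sediment, [1e3]])
        estimates[i], *_ = np.linalg.lstsq(A, b, rcond=None)

    print("true proportions:", true_props)
    print("mean estimate:   ", estimates.mean(axis=0))
    print("spread (SD):     ", estimates.std(axis=0))
    ```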

  1. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  2. Magnetoresistance of carbon nanotube-polypyrrole composite yarns

    NASA Astrophysics Data System (ADS)

    Ghanbari, R.; Ghorbani, S. R.; Arabi, H.; Foroughi, J.

    2018-05-01

    Three types of samples, a carbon nanotube yarn and carbon nanotube-polypyrrole composite yarns, were investigated by measuring the electrical conductivity as a function of temperature and magnetic field. The conductivity was well explained by the 3D Mott variable range hopping (VRH) law at T < 100 K. Both positive and negative magnetoresistance (MR) were observed with increasing magnetic field. The MR data were analyzed based on a theoretical model; a quadratic positive and negative MR was observed for all three samples. It was found that the localization length decreases with applied magnetic field while the density of states increases. The increase in the density of states raises the number of available energy states for hopping; thus the probability of an electron hopping between sites at shorter distances increases, resulting in a smaller average hopping length.
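
    The 3D Mott VRH law invoked above, sigma(T) = sigma0 exp[-(T0/T)^(1/4)], implies that ln(sigma) is linear in T^(-1/4). A hedged sketch of that check follows; the synthetic data and the values of sigma0 and T0 are placeholders, not the paper's measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters only: conductivity prefactor and Mott temperature.
    sigma0, T0 = 50.0, 4.0e4
    T = np.linspace(10.0, 100.0, 30)                  # K, the T < 100 K regime
    sigma = sigma0 * np.exp(-(T0 / T) ** 0.25) * rng.normal(1.0, 0.02, T.size)

    # Linear fit of ln(sigma) against T^(-1/4): slope = -T0^(1/4).
    slope, intercept = np.polyfit(T ** -0.25, np.log(sigma), 1)
    print("fitted T0:    ", slope ** 4)
    print("fitted sigma0:", np.exp(intercept))
    ```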

  3. Influence of turbulent fluctuations on non-equilibrium chemical reactions in the flow

    NASA Astrophysics Data System (ADS)

    Molchanov, A. M.; Yanyshev, D. S.; Bykov, L. V.

    2017-11-01

    In chemically nonequilibrium flows, the calculation of the source terms (formation rates) in the equations for chemical species is of utmost importance. The formation rate of each species is a non-linear function of mixture density, temperature and species concentrations, so assuming that the mean rate can be determined from mean values of the flow parameters can lead to significant errors. One of the most accurate approaches here is the use of a probability density function (PDF). In this paper a method for constructing such PDFs is developed. The developed model was verified by comparison with experimental data. Using supersonic combustion as an example, it was shown that while the overall effect on the averaged flow field is often negligible, the point of ignition can be shifted considerably upstream.
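
    The central point, that averaging a nonlinear reaction rate over a PDF of the flow parameters differs from evaluating the rate at the mean parameters, can be shown with a toy Arrhenius rate. The Gaussian temperature PDF and every numerical value below are illustrative assumptions, not the PDF constructed in the paper.

    ```python
    import numpy as np

    # Arrhenius rate k(T) = A * exp(-Ta / T) with illustrative constants.
    A, Ta = 1.0e6, 15000.0         # pre-exponential factor, activation temperature (K)
    k = lambda T: A * np.exp(-Ta / T)

    T_mean, T_rms = 1200.0, 150.0  # assumed mean temperature and fluctuation level
    rng = np.random.default_rng(0)
    T = rng.normal(T_mean, T_rms, 1_000_000)
    T = T[T > 300.0]               # discard unphysical low-temperature samples

    print("rate at mean T:   ", k(T_mean))
    print("PDF-averaged rate:", k(T).mean())   # substantially larger
    ```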

  4. A GRASS GIS Semi-Stochastic Model for Evaluating the Probability of Landslides Impacting Road Networks in Collazzone, Central Italy

    NASA Astrophysics Data System (ADS)

    Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.

    2013-04-01

    During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network and the network-accessibility implications of this across a region. This was performed in the open source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km2 test region in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and already determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. The number of landslide areas selected for each triggered event iteration was chosen to give an average density of 1 landslide km-2, i.e., 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted/rejected based on the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and a length-to-width ratio. Landslide intersections with roads were then assessed, and indices such as the location, number and size of road blockages were recorded. The GRASS GIS model was run 1000 times in a Monte Carlo-type simulation. Initial results show that for a landslide triggering event of 1 landslide km-2 over a 79 km2 region with 404 km of road, the number of road blockages ranges from 6 to 17, resulting in one road blockage every 24-67 km of road. The average length of road blocked was 33 m. As we progress with model development and more sophisticated network analysis, we believe this semi-stochastic modelling approach will help civil protection agencies obtain a rough estimate of the probability of road network damage (road blockage number and extent) resulting from landslide triggering event scenarios of different magnitudes.
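
    The two sampling steps described above can be sketched as follows: landslide areas drawn from a three-parameter inverse-gamma PDF, then placed by acceptance/rejection against a susceptibility map. The parameter values are illustrative choices that put the rollover near 400 m2, and the uniform susceptibility grid is a hypothetical placeholder.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Three-parameter inverse-gamma landslide-area PDF (areas in km^2); the
    # power-law tail decays with exponent -(rho + 1) = -2.4 and the rollover
    # (mode) sits near 4e-4 km^2 = 400 m^2 for these illustrative parameters.
    rho, a, s = 1.4, 1.28e-3, -1.32e-4
    area_pdf = stats.invgamma(rho, loc=s, scale=a)

    n_landslides = 79                  # 1 landslide/km^2 over a 79 km^2 region
    areas = area_pdf.rvs(n_landslides, random_state=rng)

    # Semi-stochastic dropping: accept a random cell with probability equal
    # to its susceptibility (placeholder 0-1 map instead of the real one).
    susceptibility = rng.uniform(0.0, 1.0, size=(100, 100))
    placed = []
    while len(placed) < n_landslides:
        i, j = rng.integers(0, 100, size=2)
        if rng.uniform() < susceptibility[i, j]:
            placed.append((i, j, areas[len(placed)]))
    print(placed[:3])
    ```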

  5. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
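
    The L1-median used above is the point minimizing the sum of Euclidean distances to the samples. It is commonly computed with Weiszfeld's fixed-point iteration, sketched here under standard choices (iteration count and tolerance are not from the paper):

    ```python
    import numpy as np

    def l1_median(X, n_iter=100, tol=1e-8):
        """Geometric (L1) median of the rows of X via Weiszfeld's algorithm."""
        m = X.mean(axis=0)                    # start from the sample mean
        for _ in range(n_iter):
            d = np.linalg.norm(X - m, axis=1)
            d = np.maximum(d, tol)            # guard against zero distances
            w = 1.0 / d
            m_new = (w[:, None] * X).sum(axis=0) / w.sum()
            if np.linalg.norm(m_new - m) < tol:
                break
            m = m_new
        return m

    # The L1-median is far less sensitive to outliers than the mean:
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), [[50.0, 50.0]]])  # one gross outlier
    print("mean:     ", X.mean(axis=0))
    print("L1-median:", l1_median(X))
    ```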

  6. Statistical properties of business firms structure and growth

    NASA Astrophysics Data System (ADS)

    Matia, K.; Fu, Dongfeng; Buldyrev, S. V.; Pammolli, F.; Riccaboni, M.; Stanley, H. E.

    2004-08-01

    We analyze a database comprising quarterly sales of 55,624 pharmaceutical products commercialized by 3,939 pharmaceutical firms in the period 1992-2001. We study the probability density function (PDF) of growth in firm and product sales and find that the width of the PDF of growth decays with sales as a power law with exponent β = 0.20 ± 0.01. We also find that the average sales of products scale with firm sales as a power law with exponent α = 0.57 ± 0.02, and that the average number of products of a firm scales with firm sales as a power law with exponent γ = 0.42 ± 0.02. We compare these findings with the predictions of models of business firm growth proposed to date.

  7. Wave theory of turbulence in compressible media (acoustic theory of turbulence)

    NASA Technical Reports Server (NTRS)

    Kentzer, C. P.

    1975-01-01

    The generation and the transmission of sound in turbulent flows are treated as one of the several aspects of wave propagation in turbulence. Fluid fluctuations are decomposed into orthogonal Fourier components, with five interacting modes of wave propagation: two vorticity modes, one entropy mode, and two acoustic modes. Wave interactions, governed by the inhomogeneous and nonlinear terms of the perturbed Navier-Stokes equations, are modeled by random functions which give the rates of change of wave amplitudes equal to the averaged interaction terms. The statistical framework adopted is a quantum-like formulation in terms of complex distribution functions. The spatial probability distributions are given by the squares of the absolute values of the complex characteristic functions. This formulation results in nonlinear diffusion-type transport equations for the probability densities of the five modes of wave propagation.

  8. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  9. Empirical models of the electron temperature and density in the nightside venus ionosphere.

    PubMed

    Brace, L H; Theis, R F; Niemann, H B; Mayr, H G; Hoegy, W R; Nagy, A F

    1979-07-06

    Empirical models of the electron temperature and electron density of the late afternoon and nightside Venus ionosphere have been derived from Pioneer Venus measurements acquired between 10 December 1978 and 23 March 1979. The models describe the average ionosphere conditions near 18 degrees N latitude between 150 and 700 kilometers altitude for solar zenith angles of 80 degrees to 180 degrees. The average index of solar flux was 200. A major feature of the density model is the factor of 10 decrease beyond 90 degrees followed by a very gradual decrease between 120 degrees and 180 degrees. The density at 150 degrees is about five times greater than observed by Venera 9 and 10 at solar minimum (solar flux approximately 80), a difference that is probably related to the effects of increased solar activity on the processes that maintain the nightside ionosphere. The nightside electron density profile from the model (above 150 kilometers) can be reproduced theoretically either by transport of O(+) ions from the dayside or by precipitation of low-energy electrons. The ion transport process would require a horizontal flow velocity of about 300 meters per second, a value that is consistent with other Pioneer Venus observations. Although currently available energetic electron data do not yet permit the role of precipitation to be evaluated quantitatively, this process is clearly involved to some extent in the formation of the nightside ionosphere. Perhaps the most surprising feature of the temperature model is that the electron temperature remains high throughout the nightside ionosphere. These high nocturnal temperatures and the existence of a well-defined nightside ionopause suggest that energetic processes occur across the top of the entire nightside ionosphere, maintaining elevated temperatures. A heat flux of 2 x 10(10) electron volts per square centimeter per second, introduced at the ionopause, is consistent with the average electron temperature profile on the nightside at a solar zenith angle of 140 degrees.

  10. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models.

    PubMed

    Whittington, Jesse; Sawaya, Michael A

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focused on single-year models, and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates, but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density intervals averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggests that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.
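
    For context, spatial capture-recapture models commonly tie detection probability to the trap-to-home-range-centre distance with a half-normal function. The sketch below uses that common form; the half-normal choice and all parameter values are assumptions for illustration, not taken from the grizzly bear analysis.

    ```python
    import numpy as np

    def detection_prob(traps, centre, g0=0.2, sigma=1500.0):
        """Half-normal detection: p(d) = g0 * exp(-d^2 / (2 * sigma^2)).

        traps  : (n, 2) array of trap coordinates (m)
        centre : (2,) home-range centre
        g0     : detection probability at zero distance (assumed value)
        sigma  : spatial scale of the home range in metres (assumed value)
        """
        d = np.linalg.norm(traps - centre, axis=1)
        return g0 * np.exp(-d**2 / (2.0 * sigma**2))

    traps = np.array([[0, 0], [2000, 0], [0, 2000], [4000, 4000]], dtype=float)
    print(detection_prob(traps, centre=np.array([1000.0, 1000.0])))
    ```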

  11. The electron localization as the information content of the conditional pair density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbina, Andres S.; Torres, F. Javier; Universidad San Francisco de Quito

    2016-06-28

    In the present work, the information gained by an electron for “knowing” about the position of another electron with the same spin is calculated using the Kullback-Leibler divergence (D_KL) between the same-spin conditional pair probability density and the marginal probability. D_KL is proposed as an electron localization measurement, based on the observation that regions of space with high information gain can be associated with strongly correlated localized electrons. Taking into consideration the scaling of D_KL with the number of σ-spin electrons of a system (N^σ), the quantity χ = (N^σ − 1) D_KL f_cut is introduced as a general descriptor that allows the quantification of electron localization in space. f_cut is defined such that it goes smoothly to zero for negligible densities. χ is computed for a selection of atomic and molecular systems in order to test its capability to determine the regions in space where electrons are localized. As a general conclusion, χ is able to explain the electronic structure of molecules on chemical grounds with a high degree of success and to produce a clear differentiation of the localization of electrons that can be traced to the fluctuation in the average number of electrons in these regions.
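
    A discretized form of the divergence described above takes only a few lines; the 1D toy densities below merely stand in for the same-spin conditional pair density and the marginal density.

    ```python
    import numpy as np

    def kl_localization(p_cond, p_marg, dV):
        """Discretized Kullback-Leibler divergence D_KL(p_cond || p_marg)
        for densities sampled on a grid with volume element dV."""
        mask = (p_cond > 0) & (p_marg > 0)
        return np.sum(p_cond[mask] * np.log(p_cond[mask] / p_marg[mask])) * dV

    # Toy 1D example: a conditional density more localized than the marginal.
    x = np.linspace(-5, 5, 1001)
    dx = x[1] - x[0]
    marg = np.exp(-x**2 / 2); marg /= marg.sum() * dx
    cond = np.exp(-(x - 1.0)**2 / 0.2); cond /= cond.sum() * dx
    print("D_KL =", kl_localization(cond, marg, dx))   # > 0: information gained
    ```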

  12. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
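
    A stochastic estimate of this kind can be sketched under constant-peak-width ("gradient") conditions: place the components uniformly at random and count the fraction of trials in which every adjacent pair is at least one resolution unit apart. This success criterion and the numbers below are a simplified assumption, not the paper's exact procedure.

    ```python
    import numpy as np

    def p_success(m, n_c, n_trials=20_000, seed=0):
        """Monte Carlo estimate of the probability that m randomly placed
        components are all resolved in a separation of peak capacity n_c."""
        rng = np.random.default_rng(seed)
        wins = 0
        for _ in range(n_trials):
            t = np.sort(rng.uniform(0.0, 1.0, m))
            gaps = np.diff(t)
            # success if adjacent peaks are >= one resolution unit (1/n_c) apart
            if gaps.size == 0 or gaps.min() >= 1.0 / n_c:
                wins += 1
        return wins / n_trials

    for n_c in (50, 100, 200):
        print(n_c, p_success(m=10, n_c=n_c))
    ```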

  13. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    NASA Astrophysics Data System (ADS)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely, the Sandia flames, and H2-air supersonic combustion.
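
    As an illustration of the presumed-PDF step, the sketch below moment-matches a β-distribution to a given mean and variance of Z and averages an arbitrary chemical quantity phi(Z) over it. The peaked phi and the numerical values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def presumed_beta_average(phi, z_mean, z_var, n=2000):
        """Average phi(Z) over a presumed beta-PDF with given mean/variance.

        Moment matching gives a = z_mean * g, b = (1 - z_mean) * g with
        g = z_mean * (1 - z_mean) / z_var - 1, which requires
        0 < z_var < z_mean * (1 - z_mean).
        """
        g = z_mean * (1.0 - z_mean) / z_var - 1.0
        a, b = z_mean * g, (1.0 - z_mean) * g
        z = np.linspace(1e-6, 1.0 - 1e-6, n)
        pdf = stats.beta.pdf(z, a, b)
        return np.sum(phi(z) * pdf) * (z[1] - z[0])

    # Example: a source term peaking near a stoichiometric mixture fraction 0.3.
    phi = lambda z: np.exp(-((z - 0.3) / 0.05) ** 2)
    print("phi at mean Z:   ", phi(0.25))
    print("beta-PDF average:", presumed_beta_average(phi, z_mean=0.25, z_var=0.01))
    ```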

  14. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.

  15. Rare reaction channels in real-time time-dependent density functional theory: the test case of electron attachment

    NASA Astrophysics Data System (ADS)

    Lacombe, Lionel; Dinh, P. Huong Mai; Reinhard, Paul-Gerhard; Suraud, Eric; Sanche, Leon

    2015-08-01

    We present an extension of standard time-dependent density functional theory (TDDFT) to include the evaluation of rare reaction channels, taking as an example of application the theoretical modelling of electron attachment to molecules. The latter process is of great importance in radiation-induced damage of biological tissue for which dissociative electron attachment plays a decisive role. As the attachment probability is very low, it cannot be extracted from the TDDFT propagation whose mean field provides an average over various reaction channels. To extract rare events, we augment TDDFT by a perturbative treatment to account for the occasional jumps, namely electron capture in our test case. We apply the modelling to electron attachment to H2O, H3O+, and (H2O)2. Dynamical calculations have been done at low energy (3-16 eV). We explore, in particular, how core-excited states of the targets show up as resonances in the attachment probability. Contribution to the Topical Issue "COST Action Nano-IBCT: Nano-scale Processes Behind Ion-Beam Cancer Therapy", edited by Andrey Solov'yov, Nigel Mason, Gustavo García, Eugene Surdutovich.

  16. Probability density functions of power-in-bucket and power-in-fiber for an infrared laser beam propagating in the maritime environment.

    PubMed

    Nelson, Charles; Avramov-Zamurovic, Svetlana; Korotkova, Olga; Malek-Madani, Reza; Sova, Raymond; Davidson, Frederic

    2013-11-01

    Irradiance fluctuations of an infrared laser beam from a shore-to-ship data link ranging from 5.1 to 17.8 km are compared to lognormal (LN), gamma-gamma (GG) with aperture averaging, and gamma-Laguerre (GL) distributions. From our data analysis, the LN and GG probability density function (PDF) models were generally in good agreement in near-weak to moderate fluctuations. This was also true in moderate to strong fluctuations when the spatial coherence radius was smaller than the detector aperture size, with the exception of the 2.54 cm power-in-bucket (PIB) where the LN PDF model fit best. For moderate to strong fluctuations, the GG PDF model tended to outperform the LN PDF model when the spatial coherence radius was greater than the detector aperture size. Additionally, the GL PDF model had the best or next to best overall fit in all cases with the exception of the 2.54 cm PIB where the scintillation index was highest. The GL PDF model also appears to be robust for off-of-beam center laser beam applications.
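
    For reference, the two analytic forms compared above can be evaluated directly. The sketch below implements a unit-mean lognormal irradiance PDF and the gamma-gamma PDF in its usual form; the parameter values are arbitrary examples, not fits to the link data.

    ```python
    import numpy as np
    from scipy.special import gamma as G, kv

    def lognormal_pdf(I, si2):
        """Lognormal irradiance PDF with scintillation index si2 and <I> = 1."""
        s2 = np.log(1.0 + si2)
        return np.exp(-(np.log(I) + 0.5 * s2) ** 2 / (2 * s2)) \
            / (I * np.sqrt(2 * np.pi * s2))

    def gamma_gamma_pdf(I, alpha, beta):
        """Gamma-gamma irradiance PDF with large-/small-scale parameters."""
        return (2 * (alpha * beta) ** ((alpha + beta) / 2) / (G(alpha) * G(beta))
                * I ** ((alpha + beta) / 2 - 1)
                * kv(alpha - beta, 2 * np.sqrt(alpha * beta * I)))

    I = np.linspace(0.01, 4.0, 400)
    print(lognormal_pdf(I, 0.3)[:3])
    print(gamma_gamma_pdf(I, alpha=4.0, beta=2.0)[:3])
    ```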

  17. Emg Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface

    NASA Astrophysics Data System (ADS)

    Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai

    To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., root mean square (RMS) and mean absolute value (MAV), are widely used as a convenient and accurate input for a recognition system. Their classification performance is comparable to advanced, computationally demanding time-scale methods, i.e., the wavelet transform. However, the signal-to-noise-ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signal, i.e., Gaussian or Laplacian. The PDF of upper-limb motions associated with EMG signals is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
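
    The two amplitude estimators and the SNR definition quoted above ("mean feature divided by its fluctuation") are straightforward to reproduce. On Laplacian-distributed surrogate EMG, MAV should show the higher SNR; the window length and the surrogate signal model are illustrative assumptions.

    ```python
    import numpy as np

    def rms(x):   # root mean square amplitude estimate
        return np.sqrt(np.mean(x ** 2))

    def mav(x):   # mean absolute value amplitude estimate
        return np.mean(np.abs(x))

    def snr(features):
        """SNR as defined in the abstract: mean feature / its fluctuation."""
        return np.mean(features) / np.std(features)

    # Compare the estimators over sliding analysis windows of surrogate EMG.
    rng = np.random.default_rng(0)
    emg = rng.laplace(0.0, 1.0, 100_000)
    windows = emg.reshape(-1, 200)               # 200-sample windows
    print("SNR of RMS:", snr([rms(w) for w in windows]))
    print("SNR of MAV:", snr([mav(w) for w in windows]))
    ```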

  18. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  19. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  20. Microstructure and nanohardness distribution in a polycrystalline Zn deformed by high strain rate impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dirras, G., E-mail: dirras@univ-paris13.fr; Ouarem, A.; Couque, H.

    2011-05-15

    Polycrystalline Zn with an average grain size of about 300 μm was deformed by direct impact Hopkinson pressure bar at a velocity of 29 m/s. An inhomogeneous grain structure was found, consisting of a center region having a large average grain size of 20 μm surrounded by a fine-grained rim with an average grain size of 6 μm. Transmission electron microscopy investigations showed a significant dislocation density in the large-grained area, while in the fine-grained rim the dislocation density was negligible. Most probably, the higher strain yielded recrystallization in the outer ring while in the center only recovery occurred. The hardening effect of dislocations overwhelms the smaller grain-size strengthening in the center part, resulting in higher nanohardness in this region than in the outer ring. Graphical Abstract: (a) EBSD micrograph showing the initial microstructure of polycrystalline Zn that was subsequently submitted to high strain rate impact. (b) An inhomogeneous grain size refinement was obtained, consisting of a central coarse-grained area surrounded by a fine-grained recrystallized rim; the black arrow points to the disc center. Research Highlights: A polycrystalline Zn specimen was submitted to high strain rate impact loading. Inhomogeneous grain refinement occurred due to the strain gradient in the impacted sample. A fine-grained recrystallized rim surrounded the coarse-grained center of the specimen. The coarse-grained center exhibited higher hardness than the fine-grained rim. The higher hardness of the center was caused by the higher dislocation density.

  1. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
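
    A greatly simplified sketch of the density-estimation core, fully randomized trees whose piecewise-constant leaf estimates are averaged over the forest, is given below. It omits the streaming updates, dual node profiles, and anomaly scoring of the actual RS-Forest, and every structural choice (depth, forest size, dict-based nodes) is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def build_tree(lo, hi, depth):
        """Fully randomized space tree: split a random dimension at a random
        point, independent of any data, down to a fixed depth."""
        if depth == 0:
            return {"leaf": True, "lo": lo, "hi": hi, "count": 0}
        d = rng.integers(len(lo))
        split = rng.uniform(lo[d], hi[d])
        left_hi, right_lo = hi.copy(), lo.copy()
        left_hi[d], right_lo[d] = split, split
        return {"leaf": False, "dim": d, "split": split,
                "left": build_tree(lo, left_hi, depth - 1),
                "right": build_tree(right_lo, hi, depth - 1)}

    def leaf_of(node, x):
        while not node["leaf"]:
            node = node["left"] if x[node["dim"]] < node["split"] else node["right"]
        return node

    def density(trees, x, n):
        """Piecewise-constant density estimate averaged over the forest."""
        est = []
        for t in trees:
            leaf = leaf_of(t, x)
            est.append(leaf["count"] / (n * np.prod(leaf["hi"] - leaf["lo"])))
        return np.mean(est)

    X = rng.normal(0.0, 1.0, (2000, 2))
    lo, hi = X.min(axis=0), X.max(axis=0)
    forest = [build_tree(lo, hi, depth=6) for _ in range(25)]
    for t in forest:                        # "fit": count instances per leaf
        for x in X:
            leaf_of(t, x)["count"] += 1
    print(density(forest, np.array([0.0, 0.0]), len(X)))  # high near the mode
    print(density(forest, np.array([3.0, 3.0]), len(X)))  # low in the tail
    ```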

  2. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  3. Density profiles of supernova matter and determination of neutrino parameters

    NASA Astrophysics Data System (ADS)

    Chiu, Shao-Hsuan

    2007-08-01

    The flavor conversion of supernova neutrinos can lead to observable signatures related to the unknown neutrino parameters. As one of the determinants in dictating the efficiency of resonant flavor conversion, the local density profile near the Mikheyev-Smirnov-Wolfenstein (MSW) resonance in a supernova environment is, however, not so well understood. In this analysis, variable power-law functions are adopted to represent the independent local density profiles near the locations of resonance. It is shown that the uncertain matter density profile in a supernova, the possible neutrino mass hierarchies, and the undetermined 1-3 mixing angle would result in six distinct scenarios in terms of the survival probabilities of νe and ν¯e. The feasibility of probing the undetermined neutrino mass hierarchy and the 1-3 mixing angle with the supernova neutrinos is then examined using several proposed experimental observables. Given the incomplete knowledge of the supernova matter profile, the analysis is further expanded to incorporate the Earth matter effect. The possible impact due to the choice of models, which differ in the average energy and in the luminosity of neutrinos, is also addressed in the analysis.

  4. Contingent association between the size of the social support network and osteoporosis among Korean elderly women

    PubMed Central

    Seo, Da Hea; Kim, Kyoung Min; Lee, Eun Young; Kim, Hyeon Chang; Kim, Chang Oh; Youm, Yoosik; Rhee, Yumie

    2017-01-01

    Objective To investigate the association between the number of personal ties (or the size of the social support network) and the incidence of osteoporosis among older women in Korea. Methods Data from the Korean Urban Rural Elderly Study were used. Bone density was measured by dual-energy X-ray absorptiometry at the lumbar spine (L1–L4) and femur neck. The T-score, the standardized bone density compared with what is normally expected in a healthy young adult, was measured, and the presence of osteoporosis was determined if the T-score was < -2.5. The social support network size was measured by self-responses (number of confidants and spouse). Results Of the 1,846 participants, 44.9% were diagnosed with osteoporosis. The association between the social support network size and the incidence of osteoporosis was curvilinear in both bivariate and multivariate analyses. Having more people in one’s social support network was associated with a lower risk of osteoporosis until the network size reached around four. Increasing the social support network size beyond four, in contrast, was associated with a higher risk of osteoporosis. This association was contingent on the average intimacy level of the social network. At the highest average intimacy level (“extremely close”), increasing the number of social support network members from one to six was associated with a linear decrease in the predicted probability of osteoporosis from 45% to 30%. However, at the lowest average intimacy level (“not very close”), the predicted probability of osteoporosis dramatically increased from 48% to 80% as the size of the social network increased from one to six. Conclusion Our results show that maintaining a large and intimate social support network is associated with a lower risk of osteoporosis among elderly Korean women, while a large but less-intimate social network is associated with a higher risk. PMID:28700637

  5. Contingent association between the size of the social support network and osteoporosis among Korean elderly women.

    PubMed

    Lee, Seungwon; Seo, Da Hea; Kim, Kyoung Min; Lee, Eun Young; Kim, Hyeon Chang; Kim, Chang Oh; Youm, Yoosik; Rhee, Yumie

    2017-01-01

    To investigate the association between the number of personal ties (or the size of the social support network) and the incidence of osteoporosis among older women in Korea. Data from the Korean Urban Rural Elderly Study were used. Bone density was measured by dual-energy X-ray absorptiometry at the lumbar spine (L1-L4) and femur neck. The T-score, the standardized bone density compared with what is normally expected in a healthy young adult, was measured, and the presence of osteoporosis was determined if the T-score was < -2.5. The social support network size was measured by self-responses (number of confidants and spouse). Of the 1,846 participants, 44.9% were diagnosed with osteoporosis. The association between the social support network size and the incidence of osteoporosis was curvilinear in both bivariate and multivariate analyses. Having more people in one's social support network was associated with a lower risk of osteoporosis until the network size reached around four. Increasing the social support network size beyond four, in contrast, was associated with a higher risk of osteoporosis. This association was contingent on the average intimacy level of the social network. At the highest average intimacy level ("extremely close"), increasing the number of social support network members from one to six was associated with a linear decrease in the predicted probability of osteoporosis from 45% to 30%. However, at the lowest average intimacy level ("not very close"), the predicted probability of osteoporosis dramatically increased from 48% to 80% as the size of the social network increased from one to six. Our results show that maintaining a large and intimate social support network is associated with a lower risk of osteoporosis among elderly Korean women, while a large but less-intimate social network is associated with a higher risk.

  6. Beaver herbivory and its effect on cottonwood trees: Influence of flooding along matched regulated and unregulated rivers

    USGS Publications Warehouse

    Breck, S.W.; Wilson, K.R.; Andersen, D.C.

    2003-01-01

    We compared beaver (Castor canadensis) foraging patterns on Fremont cottonwood (Populus deltoides subsp. wislizenii) saplings and the probability of saplings being cut on a 10 km reach of the flow-regulated Green River and an 8.6 km reach of the free-flowing Yampa River in northwestern Colorado. We measured the abundance and density of cottonwood on each reach and followed the fates of individually marked saplings in three patches of cottonwood on the Yampa River and two patches on the Green River. Two natural floods on the Yampa River and one controlled flood on the Green River between May 1998 and November 1999 allowed us to assess the effect of flooding on beaver herbivory. Independent of beaver herbivory, flow regulation on the Green River has caused a decrease in the number of cottonwood patches per kilometre of river, the area of patches per kilometre, and the average stem density within cottonwood patches. The number of saplings cut per beaver colony was three times lower on the Green River than on the Yampa River, but the probability of a sapling being cut by a beaver was still higher on the Green River because of the lower sapling density there. Controlled flooding appeared to increase the rate of foraging on the Green River by inundating patches of cottonwood, which enhanced access by beaver. Our results suggest regulation can magnify the impact of beaver on cottonwood through interrelated effects on plant spatial distribution and cottonwood density, with the result that beaver herbivory will need to be considered in plans to enhance cottonwood populations along regulated rivers.

  7. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking zeta sub b(t) is first related to the original wave elevation zeta(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for zeta(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function zeta sub b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  8. Estimating neuronal connectivity from axonal and dendritic density fields

    PubMed Central

    van Pelt, Jaap; van Ooyen, Arjen

    2013-01-01

    Neurons innervate space by extending axonal and dendritic arborizations. When axons and dendrites come in close proximity of each other, synapses between neurons can be formed. Neurons vary greatly in their morphologies and synaptic connections with other neurons. The size and shape of the arborizations determine the way neurons innervate space. A neuron may therefore be characterized by the spatial distribution of its axonal and dendritic “mass.” A population mean “mass” density field of a particular neuron type can be obtained by averaging over the individual variations in neuron geometries. Connectivity in terms of candidate synaptic contacts between neurons can be determined directly on the basis of their arborizations but also indirectly on the basis of their density fields. To decide when a candidate synapse can be formed, we previously developed a criterion defining that axonal and dendritic line pieces should cross in 3D and have an orthogonal distance less than a threshold value. In this paper, we developed new methodology for applying this criterion to density fields. We show that estimates of the number of contacts between neuron pairs calculated from their density fields are fully consistent with the number of contacts calculated from the actual arborizations. However, the connection probability and the expected number of contacts per connection cannot be calculated directly from density fields, because density fields no longer carry the correlative structure in the spatial distribution of synaptic contacts. Alternatively, these two connectivity measures can be estimated from the expected number of contacts by using empirical mapping functions. The neurons used for the validation studies were generated by our neuron simulator NETMORPH. An example is given of the estimation of average connectivity and Euclidean pre- and postsynaptic distance distributions in a network of neurons represented by their population mean density fields. PMID:24324430
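
    The candidate-synapse criterion described above, line pieces crossing in 3D with orthogonal distance below a threshold, can be sketched as a closest-approach test between two segments. The threshold value and example coordinates are arbitrary.

    ```python
    import numpy as np

    def candidate_synapse(p1, p2, q1, q2, threshold=1.0):
        """Test one axonal piece (p1->p2) against one dendritic piece (q1->q2):
        the closest approach must fall within both pieces ('crossing') and the
        orthogonal distance must be below the threshold (e.g. micrometres)."""
        u, v, w = p2 - p1, q2 - q1, p1 - q1
        a, b, c, d, e = u @ u, u @ v, v @ v, u @ w, v @ w
        denom = a * c - b * b
        if denom < 1e-12:                    # (nearly) parallel pieces
            return False
        s = (b * e - c * d) / denom          # closest-approach parameters
        t = (a * e - b * d) / denom
        if not (0.0 <= s <= 1.0 and 0.0 <= t <= 1.0):
            return False                     # closest approach outside pieces
        return np.linalg.norm(w + s * u - t * v) < threshold

    p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.])
    q1, q2 = np.array([0.5, -0.5, 0.4]), np.array([0.5, 0.5, 0.4])
    print(candidate_synapse(p1, p2, q1, q2))   # True: crossing, 0.4 apart
    ```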

  9. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  10. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  11. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  12. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
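
    The underlying fact, that a density which is uniform in one parameterization becomes non-uniform after a nonlinear change of variables unless the Jacobian is carried along, is easy to demonstrate numerically. The transformation y = 1/r and the interval below are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Uniform "admissible region" density on r in [1, 2], so p(r) = 1 there.
    r = rng.uniform(1.0, 2.0, 200_000)
    y = 1.0 / r                         # nonlinear reparameterization

    # Transformed density on y in [0.5, 1]: p(y) = p(r(y)) * |dr/dy| = 1 / y^2.
    hist, edges = np.histogram(y, bins=50, range=(0.5, 1.0), density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    for yg in (0.55, 0.7, 0.9):
        empirical = hist[np.argmin(np.abs(centres - yg))]
        print(f"y={yg:.2f}  analytic={1.0 / yg**2:.3f}  empirical={empirical:.3f}")
    ```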

  13. Limits on modes of lithospheric heat transport on Venus from impact crater density

    NASA Technical Reports Server (NTRS)

    Grimm, Robert E.; Solomon, Sean C.

    1987-01-01

    Based on the observed density of impact craters on the Venus surface obtained from Venera 15-16 radar images, a formalism to estimate the upper bounds on the contributions made to lithospheric heat transport by volcanism and lithospheric recycling is presented. The Venera 15-16 data, if representative of the entire planet, limit the average rate of volcanic resurfacing on Venus to less than 2 cu km/yr (corresponding to less than 1 percent of the global heat loss), and limit the rate of lithospheric recycling to less than 1.5 sq km/yr (and probably to less than 0.5 sq km/yr), corresponding to 25 percent (and to 9 percent) of the global heat loss. The present results indicate that heat loss at lithospheric levels in Venus is dominated by conduction.

  14. How directional mobility affects coexistence in rock-paper-scissors models

    NASA Astrophysics Data System (ADS)

    Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.; Santos, M. A.

    2018-03-01

    This work deals with a system of three distinct species that changes in time under the presence of mobility, selection, and reproduction, as in the popular rock-paper-scissors game. The novelty of the current study is the modification of the mobility rule to the case of directional mobility, in which the species move in the direction associated with the largest (averaged) number density of selection targets in the surrounding neighborhood. Directional mobility can be used to simulate eyes that see or a nose that smells, and we show how it may contribute to reducing the probability of coexistence.
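
    One way to read the directional rule is sketched below: for each of the four lattice directions, average the density of selection targets over a short look-ahead and move toward the maximum. The neighbourhood shape, radius, and species encoding are illustrative assumptions, not the paper's exact rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def directional_move(lattice, i, j, radius=2):
        """Direction for the individual at (i, j): the one whose look-ahead
        contains the largest average density of selection targets (species s
        preys on species (s + 1) % 3; empty sites are encoded as -1)."""
        prey = (lattice[i, j] + 1) % 3
        n = lattice.shape[0]
        best, best_density = None, -1.0
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            cells = [lattice[(i + k * di) % n, (j + k * dj) % n]
                     for k in range(1, radius + 1)]
            density = np.mean([c == prey for c in cells])
            if density > best_density:
                best, best_density = (di, dj), density
        return best

    lattice = rng.integers(-1, 3, size=(50, 50))   # species 0-2 plus empty (-1)
    print(directional_move(lattice, 10, 10))
    ```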

  15. How directional mobility affects coexistence in rock-paper-scissors models.

    PubMed

    Avelino, P P; Bazeia, D; Losano, L; Menezes, J; de Oliveira, B F; Santos, M A

    2018-03-01

    This work deals with a system of three distinct species that changes in time under the presence of mobility, selection, and reproduction, as in the popular rock-paper-scissors game. The novelty of the current study is the modification of the mobility rule to the case of directional mobility, in which the species move in the direction associated with the largest (averaged) number density of selection targets in the surrounding neighborhood. Directional mobility can be used to simulate eyes that see or a nose that smells, and we show how it may contribute to reducing the probability of coexistence.

  16. Scale matters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L. G.

    The applicability of Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.

  17. Scale matters

    DOE PAGES

    Margolin, L. G.

    2018-03-19

    The applicability of the Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.

  18. Mesoscopic fluctuations and intermittency in aging dynamics

    NASA Astrophysics Data System (ADS)

    Sibani, P.

    2006-01-01

    Mesoscopic aging systems are characterized by large intermittent noise fluctuations. In a record dynamics scenario (Sibani P. and Dall J., Europhys. Lett., 64 (2003) 8) these events, quakes, are treated as a Poisson process with average α ln(1 + t/tw), where t is the observation time, tw is the age and α is a parameter. Assuming for simplicity that quakes constitute the only source of de-correlation, we present a model for the probability density function (PDF) of the configuration autocorrelation function. Besides α, the model has the average quake size 1/q as a parameter. The model autocorrelation PDF has a Gumbel-like shape, which approaches a Gaussian for large t/tw and becomes sharply peaked in the thermodynamic limit. Its average and variance, which are given analytically, depend on t/tw as a power law and a power law with a logarithmic correction, respectively. Most predictions are in good agreement with data from the literature and with simulations of the Edwards-Anderson spin-glass carried out as a test.
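
    A toy Monte Carlo of this picture is sketched below, under the stated assumptions that the quake count over [tw, tw + t] is Poisson with mean α ln(1 + t/tw) and that each quake reduces the log-autocorrelation by an independent exponential amount of mean 1/q; the parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    alpha, q = 4.0, 10.0     # hypothetical: quake-rate parameter and inverse mean quake size
    tw, t = 100.0, 300.0     # system age and observation time
    n_samples = 20_000

    # Quakes in [tw, tw + t] form a Poisson process with mean alpha * ln(1 + t / tw)
    n_quakes = rng.poisson(alpha * np.log1p(t / tw), size=n_samples)

    # Assume each quake cuts ln C by an independent exponential amount with mean 1/q,
    # and that quakes are the only source of de-correlation (as in the abstract).
    log_c = -np.array([rng.exponential(1.0 / q, size=n).sum() for n in n_quakes])
    c = np.exp(log_c)

    print("mean autocorrelation:", c.mean(), " variance:", c.var())
    ```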

  19. Nesting success of Northern Pintails on the coastal Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Flint, Paul L.; Grand, James B.

    1996-01-01

    We studied nesting chronology and success of Northern Pintails (Anas acuta) on the coastal Yukon-Kuskokwim Delta, Alaska during the summers of 1991-1993. We found a total of 795 nests during three annual searches of a 27.4 km2 area. Minimum nest density averaged 9.67 nests per km2. Nesting success varied among years and ranged from 43.12% in 1991 to 10.74% in 1993 (average 23.95%). Most nest loss was the result of predation and tidal flooding. Daily nest survival probability declined with nest initiation date in all three years and also varied with nest age in 1992. Clutch size averaged 7.63 ± 0.067 (SE) eggs per nest and was larger than reported for other populations of Northern Pintails. Clutch size declined during the 44-47 day nesting interval at a greater rate than reported for other populations of Northern Pintails. We conclude that sub-arctic and prairie nesting Northern Pintails have similar reproductive potentials.

  20. A Two-Piece Microkeratome-Assisted Mushroom Keratoplasty Improves the Outcomes and Survival of Grafts Performed in Eyes with Diseased Stroma and Healthy Endothelium (An American Ophthalmological Society Thesis)

    PubMed Central

    Busin, Massimo; Madi, Silvana; Scorcia, Vincenzo; Santorum, Paolo; Nahum, Yoav

    2015-01-01

    Purpose: To test the hypothesis that a new microkeratome-assisted penetrating keratoplasty (PK) technique employing transplantation of a two-piece mushroom-shaped graft may result in better visual outcomes and graft survival rates than those of conventional PK. Methods: Retrospective chart review of 96 eyes at low risk and 76 eyes at high risk for immunologic rejection (all with full-thickness central corneal opacity and otherwise healthy endothelium) undergoing mushroom PK between 2004 and 2012 at our institution. Outcome measures were best-corrected visual acuity (BCVA), refraction, corneal topography, endothelial cell density, graft rejection, and survival probability. Results: Five years postoperatively, BCVA of 20/40 and 20/20 was recorded in 100% and over 50% of eyes, respectively. Mean spherical equivalent of refractive error did not vary significantly over the 5-year period; astigmatism consistently averaged below 4 diopters, with no statistically significant change over time, and was of the regular type in over 90% of eyes. Endothelial cell density decreased to about 40% of the eye bank count 2 years after mushroom PK and did not change significantly thereafter. Five years postoperatively, the probabilities of graft immunologic rejection and graft survival were below 5% and above 95%, respectively. There was no statistically significant difference in endothelial cell loss, graft rejection, and survival probability between the low-risk and high-risk subgroups. Conclusions: Refractive and visual outcomes of mushroom PK compare favorably with those of conventional full-thickness keratoplasty. In eyes at high risk for immunologic rejection, mushroom PK provides a considerably higher probability of graft survival than conventional PK. PMID:26538771

  1. Predicting Atomic Decay Rates Using an Informational-Entropic Approach

    NASA Astrophysics Data System (ADS)

    Gleiser, Marcelo; Jiang, Nan

    2018-06-01

    We show that a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially-localized or periodic mathematical functions known as configurational entropy (CE) can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number n, we obtain a scaling law relating the n-averaged decay rates to the respective CE. The scaling law allows us to predict the n-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to n = 20, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.

  2. Predicting Atomic Decay Rates Using an Informational-Entropic Approach

    NASA Astrophysics Data System (ADS)

    Gleiser, Marcelo; Jiang, Nan

    2018-02-01

    We show that a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially-localized or periodic mathematical functions known as configurational entropy (CE) can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number n, we obtain a scaling law relating the n-averaged decay rates to the respective CE. The scaling law allows us to predict the n-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to n = 20, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.
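
    A minimal sketch of the CE construction for a one-dimensional localized density is given below; a toy Gaussian stands in for a hydrogenic probability density, and the discretized modal-fraction normalization is an assumption of this sketch.

    ```python
    import numpy as np

    # Toy localized 1D probability density (a hydrogenic |psi|^2 would replace this)
    x = np.linspace(-20, 20, 4096)
    rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

    # Squared Fourier amplitudes of the density define the modal power spectrum
    power = np.abs(np.fft.rfft(rho))**2

    # Modal fraction relative to the dominant mode; CE = -sum f ln f (discretized)
    f = power / power.max()
    ce = -np.sum(f * np.log(f + 1e-300))  # guard against log(0) for empty modes
    print("configurational entropy (arbitrary units):", ce)
    ```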

  3. Extreme values and fat tails of multifractal fluctuations

    NASA Astrophysics Data System (ADS)

    Muzy, J. F.; Bacry, E.; Kozhemyak, A.

    2006-06-01

    In this paper we discuss the problem of estimating the occurrence probability of extreme events for data drawn from a multifractal process. We also study the heavy (power-law) tail behavior of the probability density function associated with such data. We show that, because of strong correlations, the standard extreme value approach is not valid and classical tail exponent estimators should be interpreted cautiously. Extreme statistics associated with multifractal random processes turn out to be characterized by non-self-averaging properties. Our considerations rely upon an analogy between random multiplicative cascades and the physics of disordered systems, and also on recent mathematical results about the so-called multifractal formalism. Applied to financial time series, our findings allow us to propose a unified framework that accounts for the observed multiscaling properties of return fluctuations, the volatility clustering phenomenon and the observed “inverse cubic law” of the return pdf tails.

  4. Multivariate η-μ fading distribution with arbitrary correlation model

    NASA Astrophysics Data System (ADS)

    Ghareeb, Ibrahim; Atiani, Amani

    2018-03-01

    An extensive analysis for the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. Also, this paper provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception over arbitrarily correlated and not necessarily identically distributed η-μ fading channels is determined by using the MGF-based approach. The effect of fading correlation between diversity branches, fading severity parameters and diversity level is studied.

  5. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  6. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models; an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables. We adopt a constrained maximum...

  7. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
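
    The following sketch reproduces the FPE-versus-Monte-Carlo comparison on a toy one-dimensional problem (an Ornstein-Uhlenbeck process, not the Saint-Venant system); the explicit finite-difference scheme, grid, and coefficients are assumptions made for illustration.

    ```python
    import numpy as np

    # Toy analogue: dX = -theta*X dt + sigma dW, whose Fokker-Planck equation is
    #   dp/dt = d/dx(theta*x*p) + 0.5*sigma^2 * d^2p/dx^2
    theta, sigma = 1.0, 0.5
    x = np.linspace(-3, 3, 301)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / sigma**2                      # conservative explicit time step
    p = np.exp(-(x - 1.0) ** 2 / 0.02)
    p /= p.sum() * dx                                # initial density concentrated near x = 1

    steps = int(1.0 / dt)                            # evolve to t = 1
    for _ in range(steps):
        drift = np.gradient(theta * x * p, dx)
        diffusion = 0.5 * sigma**2 * np.gradient(np.gradient(p, dx), dx)
        p = np.clip(p + dt * (drift + diffusion), 0, None)
        p /= p.sum() * dx                            # keep the density normalized

    # Monte Carlo (Euler-Maruyama) reference solution
    rng = np.random.default_rng(3)
    X = np.full(50_000, 1.0)
    for _ in range(steps):
        X += -theta * X * dt + sigma * np.sqrt(dt) * rng.standard_normal(X.size)

    mean_fpe = (x * p).sum() * dx
    var_fpe = (x**2 * p).sum() * dx - mean_fpe**2
    print("FPE mean/var:", mean_fpe, var_fpe)
    print("MC  mean/var:", X.mean(), X.var())        # should agree to sampling error
    ```

    As in the study above, a single deterministic evolution of the density replaces the many realizations required by the Monte Carlo approach.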

  8. Critical spreading dynamics of parity conserving annihilating random walks with power-law branching

    NASA Astrophysics Data System (ADS)

    Laise, T.; dos Anjos, F. C.; Argolo, C.; Lyra, M. L.

    2018-09-01

    We investigate the critical spreading of the parity-conserving annihilating random walks model with Lévy-like branching. The random walks are considered to perform normal diffusion with probability p on the sites of a one-dimensional lattice, annihilating in pairs by contact. With probability 1 - p, each particle can also produce two offspring which are placed at a distance r from the original site following a power-law Lévy-like distribution P(r) ∝ 1/r^α. We perform numerical simulations starting from a single particle. A finite-time scaling analysis is employed to locate the critical diffusion probability pc below which a finite density of particles develops in the long-time limit. Further, we estimate the spreading dynamical exponents related to the increase of the average number of particles at the critical point and its respective fluctuations. The critical exponents deviate from those of the counterpart model with short-range branching for small values of α. The numerical data suggest that continuously varying spreading exponents set in while the branching process still results in diffusive-like spreading.
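
    A short sketch of drawing offspring displacements from such a power-law distribution by inverse-transform sampling follows; the continuous form with lower cutoff r_min = 1 is an assumption standing in for the discrete lattice distribution.

    ```python
    import numpy as np

    def sample_branch_distance(alpha, size, r_min=1.0, rng=None):
        """Inverse-transform samples from P(r) ∝ r^(-alpha), r >= r_min, alpha > 1."""
        rng = rng or np.random.default_rng()
        u = rng.random(size)
        # CDF: F(r) = 1 - (r/r_min)^(1-alpha)  =>  r = r_min * (1 - u)^(1/(1-alpha))
        return r_min * (1.0 - u) ** (1.0 / (1.0 - alpha))

    r = sample_branch_distance(alpha=2.5, size=100_000)
    print("mean offspring distance:", r.mean())  # finite for alpha > 2
    ```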

  9. White-tailed deer (Odocoileus virginianus) subsidize gray wolves (Canis lupus) during a moose (Alces americanus) decline: A case of apparent competition?

    USGS Publications Warehouse

    Barber-Meyer, Shannon; Mech, L. David

    2016-01-01

    Moose (Alces americanus) in northeastern Minnesota have declined by 55% since 2006. Although the cause is unresolved, some studies have suggested that Gray Wolves (Canis lupus) contributed to the decline. After the Moose decline, wolves could either decline or switch prey. To determine which occurred in our study area, we compared winter wolf counts and summer diet before and after the Moose decline. While wolf numbers in our study area nearly doubled from 23 in winter 2002 to an average of 41 during winters 2011–2013, calf:cow ratios (the number of calves per cow observed during winter surveys) in the wider Moose range more than halved from 0.93 in 2002 to an average of 0.31 during 2011–2013. Compared to summer 2002, wolves in summers 2011–2013 consumed fewer Moose and more White-tailed Deer (Odocoileus virginianus). While deer densities were similar during each period, average vulnerability, as reflected by winter severity, was greater during 2011–2013 than 2002, probably explaining the wolf increase. During the wolf increase Moose calves remained a summer food item. These findings suggest that in part of the Moose range, deer subsidized wolf numbers while wolves also preyed on Moose calves. This contributed to a Moose decline and is a possible case of apparent competition and inverse-density-dependent predation.

  10. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.

  11. Turbulent fluctuations during pellet injection into a dipole confined plasma torus

    NASA Astrophysics Data System (ADS)

    Garnier, D. T.; Mauel, M. E.; Roberts, T. M.; Kesner, J.; Woskov, P. P.

    2017-01-01

    We report measurements of the turbulent evolution of the plasma density profile following the fast injection of lithium pellets into the Levitated Dipole Experiment (LDX) [Boxer et al., Nat. Phys. 6, 207 (2010)]. As the pellet passes through the plasma, it provides a significant internal particle source and allows investigation of density profile evolution, turbulent relaxation, and turbulent fluctuations. The total electron number within the dipole plasma torus increases by more than a factor of three, and the central density increases by more than a factor of five. During these large changes in density, the shape of the density profile is nearly "stationary" such that the gradient of the particle number within tubes of equal magnetic flux vanishes. In comparison to the usual case, when the particle source is neutral gas at the plasma edge, the internal source from the pellet causes the toroidal phase velocity of the fluctuations to reverse and changes the average particle flux at the plasma edge. An edge particle source creates an inward turbulent pinch, but an internal particle source increases the outward turbulent particle flux. Statistical properties of the turbulence are measured by multiple microwave interferometers and by an array of probes at the edge. The spatial structures of the largest amplitude modes have long radial and toroidal wavelengths. Estimates of the local and toroidally averaged turbulent particle flux show intermittency and a non-Gaussian probability distribution function. The measured fluctuations, both before and during pellet injection, have frequency and wavenumber dispersion consistent with theoretical expectations for interchange and entropy modes excited within a dipole plasma torus having warm electrons and cool ions.

  12. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
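
    A minimal sketch of the reweighting idea is given below, assuming just two candidate probability models with fixed multimodel weights and a hypothetical threshold-exceedance response; the full multimodel-inference and Bayesian parameter-estimation machinery of the paper is not reproduced.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Two plausible candidate models for an uncertain input, with multimodel weights
    models = [stats.norm(loc=10.0, scale=2.0), stats.lognorm(s=0.2, scale=10.0)]
    model_probs = np.array([0.6, 0.4])

    # Importance density representative of all plausible models: their mixture
    def sample_mixture(n):
        counts = rng.multinomial(n, model_probs)
        draws = [m.rvs(size=c, random_state=rng) for m, c in zip(models, counts)]
        return rng.permutation(np.concatenate(draws))

    def mixture_pdf(x):
        return sum(w * m.pdf(x) for w, m in zip(model_probs, models))

    x = sample_mixture(50_000)
    g = x > 15.0                    # hypothetical limit state: response exceeds a threshold

    # One common sample set, reweighted under every candidate probability model
    for w, m in zip(model_probs, models):
        p_fail = np.mean(g * m.pdf(x) / mixture_pdf(x))
        print(f"model weight {w:.1f}: P(exceedance) = {p_fail:.4g}")
    ```

    A single sample set drawn from the mixture serves every candidate model; only the importance weights change between models.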

  13. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

  14. Gravitational lensing by an ensemble of isothermal galaxies

    NASA Technical Reports Server (NTRS)

    Katz, Neal; Paczynski, Bohdan

    1987-01-01

    Calculation of 28,000 models of gravitational lensing of a distant quasar by an ensemble of randomly placed galaxies, each having a singular isothermal mass distribution, is reported. The average surface mass density was 0.2 of the critical value in all models. It is found that the surface mass density averaged over the area of the smallest circle that encompasses the multiple images is 0.82, only slightly smaller than expected from a simple analytical model of Turner et al. (1984). The probability of getting multiple images is also as large as expected analytically. Gravitational lensing is dominated by the matter in the beam, i.e., by the beam convergence. The cases where the multiple imaging is due to asymmetry in the mass distribution (i.e., due to shear) are very rare. Therefore, the observed gravitational-lens candidates for which no lensing object has been detected between the images cannot be a result of an asymmetric mass distribution outside the images, at least in a model with randomly distributed galaxies. A surprisingly large number of large separations between the multiple images is found: up to 25 percent of multiple images have angular separations 2 to 4 times larger than expected from a simple analytical model.

  15. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    PubMed

    Cheng, Jing; Xia, Linyuan

    2016-08-31

    Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build the mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by insignificant searches. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm.

  16. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network

    PubMed Central

    Cheng, Jing; Xia, Linyuan

    2016-01-01

    Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build the mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by insignificant searches. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm. PMID:27589756
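
    A compact sketch of a Cuckoo Search applied to range-based node localization appears below; the anchor layout, noise level, Lévy-flight recipe (Mantegna's algorithm), and the simplified step and fitness-based abandonment rules are assumptions made for illustration, not the paper's exact modification.

    ```python
    import numpy as np
    from math import gamma, pi, sin

    rng = np.random.default_rng(5)

    # Hypothetical deployment: 6 anchors, one unknown node, noisy range measurements
    anchors = rng.uniform(0, 100, size=(6, 2))
    true_pos = np.array([42.0, 57.0])
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, 6)

    def fitness(p):  # mean squared ranging residual (lower is better)
        return np.mean((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

    def levy(beta=1.5, size=2):  # Mantegna's algorithm for Levy-stable steps
        num = gamma(1 + beta) * sin(pi * beta / 2)
        den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma = (num / den) ** (1 / beta)
        return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

    n, iters = 20, 200
    nests = rng.uniform(0, 100, size=(n, 2))
    fit = np.array([fitness(p) for p in nests])
    for _ in range(iters):
        best = nests[fit.argmin()].copy()
        for i in range(n):
            step = 0.01 * levy() * (nests[i] - best)     # Levy flight toward/around best
            cand = np.clip(nests[i] + step, 0, 100)      # restrict to the deployment area
            if fitness(cand) < fit[i]:
                nests[i], fit[i] = cand, fitness(cand)
            # Fitness-based abandonment: worse nests are more likely to be replaced
            if rng.random() < 0.25 * fit[i] / (fit.max() + 1e-12):
                nests[i] = rng.uniform(0, 100, 2)
                fit[i] = fitness(nests[i])

    print("estimated position:", nests[fit.argmin()], "true position:", true_pos)
    ```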

  17. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on the two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features representing the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer type, and the average relative point-wise difference is about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of the treatment planning.
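
    A minimal sketch of the KDE-based conditional prediction follows, with a single predictive feature (a stand-in for the paper's two features) and synthetic training data; the functional form of the synthetic dose falloff is purely illustrative.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(6)

    # Synthetic training set of (feature, dose) pairs, e.g. signed voxel distance
    # to the target versus delivered dose; in practice these come from prior plans.
    dist = rng.uniform(-2, 30, size=5000)
    dose = 60 * np.exp(-np.clip(dist, 0, None) / 8) + rng.normal(0, 2, 5000)

    kde = gaussian_kde(np.vstack([dist, dose]))  # joint density p(feature, dose)

    def conditional_dose_density(d0, dose_grid):
        """p(dose | feature = d0) on a grid, obtained by slicing the joint KDE."""
        pts = np.vstack([np.full_like(dose_grid, d0), dose_grid])
        joint = kde(pts)
        dg = dose_grid[1] - dose_grid[0]
        return joint / (joint.sum() * dg)

    grid = np.linspace(-10, 80, 400)
    p = conditional_dose_density(10.0, grid)
    dg = grid[1] - grid[0]
    print("expected dose at feature value 10:", (p * grid).sum() * dg)
    ```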

  18. Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team

    2018-05-01

    Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction. High-density operation is one of the hot topics in tokamak plasmas. Density limit disruptions remain an important issue for safe operation. An effective density limit disruption prediction and avoidance system is the key to avoiding density limit disruptions in long pulse steady state operations. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The neural network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time series diagnostics as inputs to predict plasma density, and the second stage is a three-layer feedforward neural network to predict the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time (T_warn). In particular, T_warn is eight times better than that in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014) (from 5 ms to 40 ms). The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, an on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.

  19. Combining Breeding Bird Survey and distance sampling to estimate density of migrant and breeding birds

    USGS Publications Warehouse

    Somershoe, S.G.; Twedt, D.J.; Reid, B.

    2006-01-01

    We combined Breeding Bird Survey point count protocol and distance sampling to survey spring migrant and breeding birds in Vicksburg National Military Park on 33 days between March and June of 2003 and 2004. For 26 of 106 detected species, we used program DISTANCE to estimate detection probabilities and densities from 660 3-min point counts in which detections were recorded within four distance annuli. For most species, estimates of detection probability, and thereby density estimates, were improved through incorporation of the proportion of forest cover at point count locations as a covariate. Our results suggest Breeding Bird Surveys would benefit from the use of distance sampling and a quantitative characterization of habitat at point count locations. During spring migration, we estimated that the most common migrant species accounted for a population of 5000-9000 birds in Vicksburg National Military Park (636 ha). Species with average populations of 300 individuals during migration were: Blue-gray Gnatcatcher (Polioptila caerulea), Cedar Waxwing (Bombycilla cedrorum), White-eyed Vireo (Vireo griseus), Indigo Bunting (Passerina cyanea), and Ruby-crowned Kinglet (Regulus calendula). Of 56 species that bred in Vicksburg National Military Park, we estimated that the most common 18 species accounted for 8150 individuals. The six most abundant breeding species, Blue-gray Gnatcatcher, White-eyed Vireo, Summer Tanager (Piranga rubra), Northern Cardinal (Cardinalis cardinalis), Carolina Wren (Thryothorus ludovicianus), and Brown-headed Cowbird (Molothrus ater), accounted for 5800 individuals.

  20. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (though not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.

  1. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  2. A computational model for biosonar echoes from foliage

    PubMed Central

    Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao

    2017-01-01

    Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals’ sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats. PMID:28817631

  3. A computational model for biosonar echoes from foliage.

    PubMed

    Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf

    2017-01-01

    Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals' sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats.
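
    A sketch of the three-parameter generative model follows; leaf positions are drawn uniformly and sizes and orientations i.i.d. Gaussian, per the description above, while the insonified geometry, echo amplitude law, and sampling rate are assumptions of this toy version.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # The model's three foliage parameters
    leaf_density = 50.0                     # leaves per cubic meter
    mean_size, sd_size = 0.04, 0.01         # disk radius [m]
    mean_orient, sd_orient = 0.0, 0.3       # angle of disk normal to the beam [rad]

    volume = 1.0                            # hypothetical insonified volume [m^3]
    n_leaves = rng.poisson(leaf_density * volume)

    pos = rng.uniform(1.0, 2.0, size=n_leaves)               # leaf ranges [m], uniform
    size = np.abs(rng.normal(mean_size, sd_size, n_leaves))  # i.i.d. Gaussian sizes
    orient = rng.normal(mean_orient, sd_orient, n_leaves)    # i.i.d. Gaussian orientations

    # Toy echo: sum of delayed impulses, amplitude ~ projected disk area / range^2
    c, fs = 343.0, 400_000                                   # sound speed, sample rate
    delays = (2 * pos / c * fs).astype(int)
    amps = (np.pi * size**2) * np.abs(np.cos(orient)) / pos**2
    echo = np.zeros(delays.max() + 1)
    np.add.at(echo, delays, amps)

    print(n_leaves, "leaves; echo RMS amplitude:", np.sqrt(np.mean(echo**2)))
    ```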

  4. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garnier, D. T.; Mauel, M. E.; Roberts, T. M.

    Here, we report measurements of the turbulent evolution of the plasma density profile following the fast injection of lithium pellets into the Levitated Dipole Experiment (LDX) [Boxer et al., Nat. Phys. 6, 207 (2010)]. As the pellet passes through the plasma, it provides a significant internal particle source and allows investigation of density profile evolution, turbulent relaxation, and turbulent fluctuations. The total electron number within the dipole plasma torus increases by more than a factor of three, and the central density increases by more than a factor of five. During these large changes in density, the shape of the density profile is nearly “stationary” such that the gradient of the particle number within tubes of equal magnetic flux vanishes. In comparison to the usual case, when the particle source is neutral gas at the plasma edge, the internal source from the pellet causes the toroidal phase velocity of the fluctuations to reverse and changes the average particle flux at the plasma edge. An edge particle source creates an inward turbulent pinch, but an internal particle source increases the outward turbulent particle flux. Statistical properties of the turbulence are measured by multiple microwave interferometers and by an array of probes at the edge. The spatial structures of the largest amplitude modes have long radial and toroidal wavelengths. Estimates of the local and toroidally averaged turbulent particle flux show intermittency and a non-Gaussian probability distribution function. The measured fluctuations, both before and during pellet injection, have frequency and wave number dispersion consistent with theoretical expectations for interchange and entropy modes excited within a dipole plasma torus having warm electrons and cool ions.

  6. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  7. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero, due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, and (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision, and suggests a method to extract the background Hubble constant from low-redshift data without the need to correct for peculiar velocities.

  8. The ratio of N(C18O) and AV in Chamaeleon I and III-B. Using 2MASS and SEST

    NASA Astrophysics Data System (ADS)

    Kainulainen, J.; Lehtinen, K.; Harju, J.

    2006-02-01

    We investigate the relationship between the C18O column density and the visual extinction in Chamaeleon I and in a part of the Chamaeleon III molecular cloud. The C18O column densities, N(C18O), are calculated from J = 1-0 rotational line data observed with the SEST telescope. The visual extinctions, A_V, are derived using JHK photometry from the 2MASS survey and the NICER color excess technique. In contrast with the previous results of Hayakawa et al. (2001, PASJ, 53, 1109), we find that the average N(C18O)/A_V ratios are similar in Cha I and Cha III, and lie close to values derived for other clouds, i.e. N(C18O) ≈ 2 × 10^14 cm^-2 (A_V - 2). We find, however, clear deviations from this average relationship towards individual clumps. Larger than average N(C18O)/A_V ratios can be found in clumps associated with the active star forming region in the northern part of Cha I. On the other hand, some regions in the relatively quiescent southern part of Cha I show smaller than average N(C18O)/A_V ratios and also very shallow proportionality between N(C18O) and A_V. The shallow proportionality suggests that C18O is heavily depleted in these regions. As the degree of depletion is proportional to the gas density, these regions probably contain very dense, cold cores, which do not stand out in CO mappings. A comparison with the dust temperature map derived from the ISO data shows that the most prominent of the potentially depleted cores indeed coincides with a dust temperature minimum. It seems therefore feasible to use N(C18O) and A_V data together for identifying cold, dense cores in large scale mappings.

  9. Equilibrium energy spectrum of point vortex motion with remarks on ensemble choice and ergodicity

    NASA Astrophysics Data System (ADS)

    Esler, J. G.

    2017-01-01

    The dynamics and statistical mechanics of N chaotically evolving point vortices in the doubly periodic domain are revisited. The selection of the correct microcanonical ensemble for the system is first investigated. The numerical results of Weiss and McWilliams [Phys. Fluids A 3, 835 (1991), 10.1063/1.858014], who argued that the point vortex system with N = 6 is nonergodic because of an apparent discrepancy between ensemble averages and dynamical time averages, are shown to be due to an incorrect ensemble definition. When the correct microcanonical ensemble is sampled, accounting for the vortex momentum constraint, time averages obtained from direct numerical simulation agree with ensemble averages within the sampling error of each calculation, i.e., there is no numerical evidence for nonergodicity. Further, in the N → ∞ limit it is shown that the vortex momentum no longer constrains the long-time dynamics and therefore that the correct microcanonical ensemble for statistical mechanics is that associated with the entire constant energy hypersurface in phase space. Next, a recently developed technique is used to generate an explicit formula for the density of states function for the system, including for arbitrary distributions of vortex circulations. Exact formulas for the equilibrium energy spectrum, and for the probability density function of the energy in each Fourier mode, are then obtained. Results are compared with a series of direct numerical simulations with N = 50 and excellent agreement is found, confirming the relevance of the results for interpretation of quantum and classical two-dimensional turbulence.

  10. Average probability that a "cold hit" in a DNA database search results in an erroneous attribution.

    PubMed

    Song, Yun S; Patil, Anand; Murphy, Erin E; Slatkin, Montgomery

    2009-01-01

    We consider a hypothetical series of cases in which the DNA profile of a crime-scene sample is found to match a known profile in a DNA database (i.e., a "cold hit"), resulting in the identification of a suspect based only on genetic evidence. We show that the average probability that there is another person in the population whose profile matches the crime-scene sample but who is not in the database is approximately 2(N - d)p(A), where N is the number of individuals in the population, d is the number of profiles in the database, and p(A) is the average match probability (AMP) for the population. The AMP is estimated by computing the average of the probabilities that two individuals in the population have the same profile. We show further that if a priori each individual in the population is equally likely to have left the crime-scene sample, then the average probability that the database search attributes the crime-scene sample to a wrong person is (N - d)p(A).
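
    The two closed-form results quoted above translate directly into code; the population, database size, and AMP values below are purely illustrative.

    ```python
    def cold_hit_probabilities(N, d, amp):
        """Average probability that someone outside the database also matches the
        crime-scene sample, 2*(N - d)*p_A, and average probability that the search
        attributes the sample to the wrong person, (N - d)*p_A (per the abstract)."""
        p_other_match = 2 * (N - d) * amp
        p_wrong_attribution = (N - d) * amp
        return p_other_match, p_wrong_attribution

    # Illustrative (hypothetical) numbers: 10 million people, 500k profiles, AMP = 1e-9
    print(cold_hit_probabilities(N=10_000_000, d=500_000, amp=1e-9))
    ```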

  11. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin, independent slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  12. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin, independent slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
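
    A sketch of this parameterization using the skew-normal as the skew-symmetric interfacial density is given below (scipy's skewnorm; the layer materials, slice thickness, and parameter values are assumptions); the resulting slices are the constant-density inputs a Parratt recursion would consume.

    ```python
    import numpy as np
    from scipy.stats import skewnorm

    # Asymmetric interfacial density distribution: a skew-normal with shape a.
    # The interface profile between two media is taken as the skew-normal CDF.
    a, loc, scale = 4.0, 0.0, 5.0          # skewness, interface position, width [Å]
    rho_top, rho_bottom = 0.0, 9.4         # densities on either side (illustrative units)

    z = np.arange(-30.0, 30.0, 0.5)        # depth grid: 0.5 Å slices
    profile = rho_top + (rho_bottom - rho_top) * skewnorm.cdf(z, a, loc, scale)

    # Discretize into independent constant-density slices with sharp interfaces,
    # ready for Parratt's recursion or an onion-model scattering calculation.
    slices = [(0.5, 0.5 * (profile[i] + profile[i + 1])) for i in range(len(z) - 1)]
    print(len(slices), "slices; first/last densities:", slices[0][1], slices[-1][1])
    ```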

  13. Continuous description of fluctuating eccentricities

    NASA Astrophysics Data System (ADS)

    Blaizot, Jean-Paul; Broniowski, Wojciech; Ollitrault, Jean-Yves

    2014-11-01

    We consider the initial energy density in the transverse plane of a high energy nucleus-nucleus collision as a random field ρ(x), whose probability distribution P[ρ], the only ingredient of the present description, encodes all possible sources of fluctuations. We argue that it is a local Gaussian, with a short-range 2-point function, and that the fluctuations relevant for the calculation of the eccentricities that drive the anisotropic flow have small relative amplitudes. In fact, this 2-point function, together with the average density, contains all the information needed to calculate the eccentricities and their variances, and we derive general model-independent expressions for these quantities. The short wavelength fluctuations are shown to play no role in these calculations, except for a renormalization of the short-range part of the 2-point function. As an illustration, we compare to a commonly used model of independent sources, and recover the known results of this model.
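
    A toy Monte Carlo illustrating how eccentricity fluctuations arise from a local Gaussian random field is sketched below; the smooth average profile, grid, and noise amplitude are assumptions, and the standard definition ε2 = |Σ ρ r^2 e^{2iφ}| / Σ ρ r^2 is used.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    n_grid, half = 128, 10.0                      # transverse grid spanning [-10, 10] fm
    xs = np.linspace(-half, half, n_grid)
    X, Y = np.meshgrid(xs, xs)
    R2, PHI = X**2 + Y**2, np.arctan2(Y, X)
    mean_rho = np.exp(-R2 / (2 * 3.0**2))         # smooth average density, width 3 fm

    def eccentricity(rho):
        w = rho * R2
        return np.abs(np.sum(w * np.exp(2j * PHI))) / np.sum(w)

    eps = []
    for _ in range(2000):
        noise = rng.normal(0, 0.3, size=mean_rho.shape)   # local (delta-correlated) Gaussian
        rho = np.clip(mean_rho * (1 + noise), 0, None)    # small relative amplitude
        eps.append(eccentricity(rho))
    eps = np.array(eps)
    print("<eps_2> =", eps.mean(), " var =", eps.var())
    ```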

  14. Modification of Lightweight Aggregates' Microstructure by Used Motor Oil Addition.

    PubMed

    Franus, Małgorzata; Jozefaciuk, Grzegorz; Bandura, Lidia; Lamorski, Krzysztof; Hajnos, Mieczysław; Franus, Wojciech

    2016-10-18

    An admixture of lightweight aggregate substrates (beidellitic clay containing 10 wt % of natural clinoptilolite or Na-P1 zeolite) with used motor oil (1 wt %-8 wt %) caused marked changes in the aggregates' microstructure, measured by a combination of mercury porosimetry (MIP), microtomography (MT), and scanning electron microscopy. Maximum porosity was produced at low (1%-2%) oil concentrations and it dropped at higher concentrations, opposite to the aggregates' bulk density. Average pore radii, measured by MIP, decreased with an increasing oil concentration, whereas larger (MT) pore sizes tended to increase. Fractal dimension, derived from MIP data, changed similarly to the MIP pore radius, while that derived from MT remained unaltered. Solid phase density, measured by helium pycnometry, initially dropped slightly and then increased with the amount of oil added, which was most probably connected to changes in the formation of extremely small closed pores that were not available for He atoms.

  15. Modification of Lightweight Aggregates’ Microstructure by Used Motor Oil Addition

    PubMed Central

    Franus, Małgorzata; Jozefaciuk, Grzegorz; Bandura, Lidia; Lamorski, Krzysztof; Hajnos, Mieczysław; Franus, Wojciech

    2016-01-01

    An admixture of lightweight aggregate substrates (beidellitic clay containing 10 wt % of natural clinoptilolite or Na-P1 zeolite) with used motor oil (1 wt %–8 wt %) caused marked changes in the aggregates’ microstructure, measured by a combination of mercury porosimetry (MIP), microtomography (MT), and scanning electron microscopy. Maximum porosity was produced at low (1%–2%) oil concentrations and it dropped at higher concentrations, opposite to the aggregates’ bulk density. Average pore radii, measured by MIP, decreased with an increasing oil concentration, whereas larger (MT) pore sizes tended to increase. Fractal dimension, derived from MIP data, changed similarly to the MIP pore radius, while that derived from MT remained unaltered. Solid phase density, measured by helium pycnometry, initially dropped slightly and then increased with the amount of oil added, which was most probably connected to changes in the formation of extremely small closed pores that were not available for He atoms. PMID:28773964

  16. Biochemical and hematologic changes after short-term space flight

    NASA Technical Reports Server (NTRS)

    Leach, Carolyn S.

    1991-01-01

    Clinical laboratory data from blood samples obtained from astronauts before and after 28 flights (average duration = 6 days) of the Space Shuttle were analyzed by the paired t-test and the Wilcoxon signed-rank test and compared with data from the Skylab flights (duration = 28, 56, and 84 days). Angiotensin I and aldosterone were elevated immediately after short-term space flights, but the response of angiotensin I was delayed after Skylab flights. Serum calcium was not elevated after Shuttle flights, but magnesium and uric acid decreased after both Shuttle and Skylab. Creatine phosphokinase in serum was reduced after Shuttle but not Skylab flights, probably because exercises to prevent deconditioning were not performed on the Shuttle. Total cholesterol was unchanged after Shuttle flights, but low density lipoprotein cholesterol increased and high density lipoprotein cholesterol decreased. The concentration of red blood cells was elevated after Shuttle flights and reduced after Skylab flights.

  17. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  18. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  19. HCl dissociating on a rigid Au(111) surface: A six-dimensional quantum mechanical study on a new potential energy surface based on the RPBE functional.

    PubMed

    Liu, Tianhui; Fu, Bina; Zhang, Dong H

    2017-04-28

The dissociative chemisorption of HCl on the Au(111) surface has recently been an interesting and important subject, owing to the discrepancy between theoretical dissociation probabilities and experimental sticking probabilities. Here we constructed an accurate full-dimensional (six-dimensional (6D)) potential energy surface (PES) based on density functional theory (DFT) with the revised Perdew-Burke-Ernzerhof (RPBE) functional, and performed 6D quantum mechanical (QM) calculations for HCl dissociating on a rigid Au(111) surface. The effects of vibrational excitations, rotational orientations, and the site-averaging approximation on the present RPBE PES are investigated. Because the barrier height obtained on the RPBE PES is much higher than that on the PW91 PES, the agreement between the present theoretical and experimental results is greatly improved. In particular, at very low kinetic energy the QM-RPBE dissociation probability agrees well with the experimental data. However, the computed QM-RPBE reaction probabilities still differ markedly from the experimental values over most of the energy range. In addition, the QM-RPBE results achieve good agreement with the recent ab initio molecular dynamics calculations based on the RPBE functional at high kinetic energies.

  20. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.

  1. A plastic scintillator-based muon tomography system with an integrated muon spectrometer

    NASA Astrophysics Data System (ADS)

    Anghel, V.; Armitage, J.; Baig, F.; Boniface, K.; Boudjemline, K.; Bueno, J.; Charles, E.; Drouin, P.-L.; Erlandson, A.; Gallant, G.; Gazit, R.; Godin, D.; Golovko, V. V.; Howard, C.; Hydomako, R.; Jewett, C.; Jonkmans, G.; Liu, Z.; Robichaud, A.; Stocki, T. J.; Thompson, M.; Waller, D.

    2015-10-01

    A muon scattering tomography system which uses extruded plastic scintillator bars for muon tracking and a dedicated muon spectrometer that measures scattering through steel slabs has been constructed and successfully tested. The atmospheric muon detection efficiency is measured to be 97% per plane on average and the average intrinsic hit resolution is 2.5 mm. In addition to creating a variety of three-dimensional images of objects of interest, a quantitative study has been carried out to investigate the impact of including muon momentum measurements when attempting to detect high-density, high-Z material. As expected, the addition of momentum information improves the performance of the system. For a fixed data-taking time of 60 s and a fixed false positive fraction, the probability to detect a target increases when momentum information is used. This is the first demonstration of the use of muon momentum information from dedicated spectrometer measurements in muon scattering tomography.

  2. Statistical properties of edge plasma turbulence in the Large Helical Device

    NASA Astrophysics Data System (ADS)

    Dewhurst, J. M.; Hnat, B.; Ohno, N.; Dendy, R. O.; Masuzaki, S.; Morisaki, T.; Komori, A.

    2008-09-01

    Ion saturation current (Isat) measurements made by three tips of a Langmuir probe array in the Large Helical Device are analysed for two plasma discharges. Absolute moment analysis is used to quantify properties on different temporal scales of the measured signals, which are bursty and intermittent. Strong coherent modes in some datasets are found to distort this analysis and are consequently removed from the time series by applying bandstop filters. Absolute moment analysis of the filtered data reveals two regions of power-law scaling, with the temporal scale τ ≈ 40 µs separating the two regimes. A comparison is made with similar results from the Mega-Amp Spherical Tokamak. The probability density function is studied and a monotonic relationship between connection length and skewness is found. Conditional averaging is used to characterize the average temporal shape of the largest intermittent bursts.
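
    A minimal sketch of the conditional-averaging step used above to extract the mean temporal shape of large bursts; the synthetic signal, threshold, and window half-width are illustrative assumptions, not values from the record:

        import numpy as np

        def conditional_average(signal, threshold, half_width):
            # Average fixed-length windows centred on local maxima above threshold.
            idx = [i for i in range(half_width, len(signal) - half_width)
                   if signal[i] > threshold
                   and signal[i] == signal[i - half_width:i + half_width + 1].max()]
            windows = np.stack([signal[i - half_width:i + half_width + 1] for i in idx])
            return windows.mean(axis=0), len(idx)

        # Synthetic intermittent signal: Gaussian noise plus random positive bursts.
        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, 100_000)
        for t0 in rng.integers(100, 99_000, size=300):
            x[t0:t0 + 50] += 5.0 * np.exp(-np.arange(50) / 10.0)

        shape, n_events = conditional_average(x, threshold=2.5 * x.std(), half_width=100)
        print(n_events, "bursts; average peak amplitude:", round(shape.max(), 2))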

  3. Maximum one-shot dissipated work from Rényi divergences

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
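
    For reference, the order-alpha Rényi divergence between discrete distributions, and its order-infinity limit log max_i(p_i/q_i) that the abstract ties to the maximum dissipated work, can be sketched as follows (the example distributions are arbitrary):

        import numpy as np

        def renyi_divergence(p, q, alpha):
            # D_alpha(P||Q) = log(sum p^alpha q^(1-alpha)) / (alpha - 1);
            # the alpha -> infinity limit is log max_i (p_i / q_i).
            p, q = np.asarray(p, float), np.asarray(q, float)
            if np.isinf(alpha):
                return np.log(np.max(p / q))
            return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

        p = np.array([0.7, 0.2, 0.1])
        q = np.array([0.4, 0.4, 0.2])
        for a in (0.5, 2.0, 10.0, 100.0, np.inf):
            print(a, renyi_divergence(p, q, a))
        # The divergence grows monotonically in alpha toward the order-infinity value.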

  4. Maximum one-shot dissipated work from Rényi divergences.

    PubMed

    Yunger Halpern, Nicole; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.

  5. Performance of correlation receivers in the presence of impulse noise.

    NASA Technical Reports Server (NTRS)

    Moore, J. D.; Houts, R. C.

    1972-01-01

    An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.

  6. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
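
    A sketch of the two ingredients discussed above: a synthetic filtered Poisson process with amplitudes that are not positive definite (Laplace-distributed, an assumption for illustration), and the empirical characteristic function on which parameter estimation is based:

        import numpy as np

        rng = np.random.default_rng(1)
        dt, n, tau_d, rate = 0.01, 50_000, 1.0, 0.5   # step, length, pulse time, arrival rate
        signal = np.zeros(n)
        n_pulses = rng.poisson(rate * n * dt)
        # One-sided exponential pulse, truncated at 10 pulse times (error ~ e^-10).
        kernel = np.exp(-np.arange(int(10 * tau_d / dt)) * dt / tau_d)
        for k0, a in zip(rng.integers(0, n, n_pulses), rng.laplace(0.0, 1.0, n_pulses)):
            seg = kernel[: n - k0]
            signal[k0:k0 + seg.size] += a * seg

        u = np.linspace(-5.0, 5.0, 41)
        ecf = np.array([np.exp(1j * uu * signal).mean() for uu in u])  # empirical char. function
        print("ecf(0) =", ecf[20])                                     # exactly 1 at u = 0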

  7. Model-based segmentation of abdominal aortic aneurysms in CTA images

    NASA Astrophysics Data System (ADS)

    de Bruijne, Marleen; van Ginneken, Bram; Niessen, Wiro J.; Loog, Marco; Viergever, Max A.

    2003-05-01

Segmentation of thrombus in abdominal aortic aneurysms is complicated by regions of low boundary contrast and by the presence of many neighboring structures in close proximity to the aneurysm wall. We present an automated method that is similar to the well-known Active Shape Models (ASM), combining a three-dimensional shape model with a one-dimensional boundary appearance model. Our contribution is twofold: we developed a non-parametric appearance modeling scheme that effectively deals with a highly varying background, and we propose a way of generalizing models of curvilinear structures from small training sets. In contrast with the conventional ASM approach, the new appearance model trains on both true and false examples of boundary profiles. The probability that a given image profile belongs to the boundary is obtained using k nearest neighbor (kNN) probability density estimation. The performance of this scheme is compared to that of original ASMs, which minimize the Mahalanobis distance to the average true profile in the training set. The generalizability of the shape model is improved by modeling the object's axis deformation independently of its cross-sectional deformation. A leave-one-out experiment was performed on 23 datasets. Segmentation using the kNN appearance model significantly outperformed the original ASM scheme; average volume errors were 5.9% and 46%, respectively.
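
    The kNN boundary-probability idea can be sketched as follows; the training profiles below are synthetic stand-ins, not the CTA data of the paper:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(2)
        # Hypothetical 1-D grey-value profiles: sampled across the true boundary
        # (label 1) and at displaced, false positions (label 0).
        true_profiles = rng.normal(loc=np.linspace(-1, 1, 9), scale=0.3, size=(200, 9))
        false_profiles = rng.normal(loc=0.0, scale=0.6, size=(200, 9))
        X = np.vstack([true_profiles, false_profiles])
        y = np.array([1] * 200 + [0] * 200)

        # P(boundary | profile) = fraction of true-boundary examples among the
        # k nearest training profiles, i.e. kNN density-ratio estimation.
        knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)
        candidate = rng.normal(loc=np.linspace(-1, 1, 9), scale=0.3, size=(1, 9))
        print("P(boundary) =", knn.predict_proba(candidate)[0, 1])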

  8. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
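
    A short numerical sketch of the calculation the abstract refers to, for a particle in a one-dimensional infinite well (hbar = 1; well width, state index, and grids are arbitrary choices):

        import numpy as np

        L, n, hbar = 1.0, 2, 1.0
        x = np.linspace(0.0, L, 2001)
        psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)     # position-space eigenfunction

        # Momentum amplitude by Fourier transform; density is |phi(p)|^2.
        p = np.linspace(-60.0, 60.0, 801)
        phase = np.exp(-1j * np.outer(p, x) / hbar)
        phi = np.trapz(phase * psi, x, axis=1) / np.sqrt(2.0 * np.pi * hbar)
        density = np.abs(phi) ** 2

        print("normalisation:", round(np.trapz(density, p), 4))  # close to 1
        print("peaks lie near p = +/-", n * np.pi * hbar / L)    # not sharp momentum values

    The density peaks near, but is not confined to, p = +/- n*pi*hbar/L, which is the point the abstract emphasises: a quantised energy does not translate into two sharp momentum values.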

  9. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
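
    A sketch of both steps under stated assumptions: the classic MaxEnt point probabilities for a die with a mean-value constraint, and a Monte Carlo stand-in for the generalized density obtained when the constraint value is Gaussian-uncertain:

        import numpy as np
        from scipy.optimize import brentq

        faces = np.arange(1, 7)

        def maxent_probs(mean):
            # The maximum-entropy distribution on {1..6} with a given mean has the
            # form p_i proportional to exp(lam * i); solve for lam by root finding.
            def mean_gap(lam):
                w = np.exp(lam * faces)
                return (faces * w).sum() / w.sum() - mean
            lam = brentq(mean_gap, -10.0, 10.0)
            w = np.exp(lam * faces)
            return w / w.sum()

        print(maxent_probs(4.5))                 # classic MaxEnt point probabilities

        # Generalized version: push Gaussian-uncertain constraint values through
        # the same MaxEnt map, giving a spread over the probabilities themselves.
        rng = np.random.default_rng(3)
        samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.1, 2000)])
        print(samples.mean(axis=0), samples.std(axis=0))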

  10. Surface slip during large Owens Valley earthquakes

    NASA Astrophysics Data System (ADS)

    Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.

    2016-06-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
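
    The COPD construction can be sketched as follows; the offsets, uncertainties, and Gaussian stand-ins for the cross-correlation PDFs are illustrative, not the paper's measurements:

        import numpy as np
        from scipy.stats import norm

        # Hypothetical lateral offsets (m) and their 1-sigma uncertainties.
        offsets = np.array([3.1, 3.4, 2.9, 3.6, 7.0, 7.3, 6.8, 12.5, 13.0])
        sigmas = np.array([0.4, 0.5, 0.3, 0.6, 0.7, 0.5, 0.8, 0.9, 1.0])

        # Stack one PDF per measurement along a common offset grid.
        grid = np.linspace(0.0, 20.0, 2001)
        copd = sum(norm.pdf(grid, mu, s) for mu, s in zip(offsets, sigmas))

        # Local maxima of the stacked curve mark candidate event displacements.
        peaks = grid[1:-1][(copd[1:-1] > copd[:-2]) & (copd[1:-1] > copd[2:])]
        print("candidate cumulative offsets (m):", np.round(peaks, 1))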

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovich, P.; Carter, T. A.; Friedman, B.

Numerical simulation of plasma turbulence in the Large Plasma Device (LAPD) [W. Gekelman, H. Pfister, Z. Lucky et al., Rev. Sci. Instrum. 62, 2875 (1991)] is presented. The model, implemented in the BOUndary Turbulence code [M. Umansky, X. Xu, B. Dudson et al., Contrib. Plasma Phys. 180, 887 (2009)], includes three-dimensional (3D) collisional fluid equations for plasma density, electron parallel momentum, and current continuity, and also includes the effects of ion-neutral collisions. In nonlinear simulations using measured LAPD density profiles but assuming a constant temperature profile for simplicity, self-consistent evolution of instabilities and nonlinearly generated zonal flows results in a saturated turbulent state. Comparisons of these simulations with measurements in LAPD plasmas reveal good qualitative and reasonable quantitative agreement, in particular in the frequency spectrum, spatial correlation, and amplitude probability distribution function of density fluctuations. For comparison with LAPD measurements, the plasma density profile in simulations is maintained either by direct azimuthal averaging on each time step, or by adding a particle source/sink function. The inferred source/sink values are consistent with the estimated ionization source and parallel losses in LAPD. These simulations lay the groundwork for a more comprehensive effort to test fluid turbulence simulation against LAPD data.

  12. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

In this paper the statistical generalization of the Fermi paradox is provided. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (both of which do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
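
    A Monte Carlo sketch of the SEH mechanism: the product of ten independent positive factors is approximately lognormal (CLT applied to the sum of the logarithms), and the nearest-neighbour distance scales as the inverse cubic root of NHab. All numerical values below, including the factor means and the galactic volume, are placeholders, not Dole's actual figures:

        import numpy as np

        rng = np.random.default_rng(4)
        means = np.array([1e11, 0.6, 0.5, 0.7, 0.5, 0.4, 0.3, 0.6, 0.5, 0.2])
        # Each factor uniform within +/-10% of its mean, as in the worked example.
        factors = rng.uniform(means * 0.9, means * 1.1, size=(100_000, 10))
        n_hab = factors.prod(axis=1)

        logs = np.log(n_hab)
        print("log N_hab mean/std:", logs.mean(), logs.std())  # near-Gaussian log

        galaxy_volume_ly3 = 2.4e12          # placeholder for the galactic disc volume
        distance = (galaxy_volume_ly3 / n_hab) ** (1.0 / 3.0)  # D ~ N_hab^(-1/3)
        print("median nearest-neighbour distance (ly):", np.median(distance))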

  13. A Comparison of Grizzly Bear Demographic Parameters Estimated from Non-Spatial and Spatial Open Population Capture-Recapture Models

    PubMed Central

    Whittington, Jesse; Sawaya, Michael A.

    2015-01-01

    Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal’s home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786–1.071) for females, 0.844 (0.703–0.975) for males, and 0.882 (0.779–0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758–1.024) for females, 0.825 (0.700–0.948) for males, and 0.863 (0.771–0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park’s population of grizzly bears requires continued conservation-oriented management actions. PMID:26230262

  14. Scale matters

    NASA Astrophysics Data System (ADS)

    Margolin, L. G.

    2018-04-01

The applicability of Navier-Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman-Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics. This article is part of the theme issue 'Hilbert's sixth problem'.

  15. Tidal tomography constrains Earth's deep-mantle buoyancy.

    PubMed

    Lau, Harriet C P; Mitrovica, Jerry X; Davis, James L; Tromp, Jeroen; Yang, Hsin-Ying; Al-Attar, David

    2017-11-15

Earth's body tide (also known as the solid Earth tide: the displacement of the solid Earth's surface caused by gravitational forces from the Moon and the Sun) is sensitive to the density of the two Large Low Shear Velocity Provinces (LLSVPs) beneath Africa and the Pacific. These massive regions extend approximately 1,000 kilometres upward from the base of the mantle and their buoyancy remains actively debated within the geophysical community. Here we use tidal tomography to constrain Earth's deep-mantle buoyancy derived from Global Positioning System (GPS)-based measurements of semi-diurnal body tide deformation. Using a probabilistic approach, we show that across the bottom two-thirds of the two LLSVPs the mean density is about 0.5 per cent higher than the average mantle density across this depth range (that is, their mean buoyancy is minus 0.5 per cent), although this anomaly may be concentrated towards the very base of the mantle. We conclude that the buoyancy of these structures is dominated by the enrichment of high-density chemical components, probably related to subducted oceanic plates or primordial material associated with Earth's formation. Because the dynamics of the mantle is driven by density variations, our result has important dynamical implications for the stability of the LLSVPs and the long-term evolution of the Earth system.

  16. Supernova Driving. II. Compressive Ratio in Molecular-cloud Turbulence

    NASA Astrophysics Data System (ADS)

    Pan, Liubin; Padoan, Paolo; Haugbølle, Troels; Nordlund, Åke

    2016-07-01

    The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.

  17. A computer simulation of free-volume distributions and related structural properties in a model lipid bilayer.

    PubMed Central

    Xiang, T X

    1993-01-01

A novel combined approach of molecular dynamics (MD) and Monte Carlo simulations is developed to calculate various free-volume distributions as a function of position in a lipid bilayer membrane at 323 K. The model bilayer consists of 2 × 100 chain molecules, with each chain molecule having 15 carbon segments and one head group and subject to forces restricting bond stretching, bending, and torsional motions. At a surface density of 30 Å²/chain molecule, the probability density of finding effective free volume available to spherical permeants displays a distribution with two exponential components. Both pre-exponential factors, p1 and p2, remain roughly constant in the highly ordered chain region, with average values of 0.012 and 0.00039 Å⁻³, respectively, and increase to 0.049 and 0.0067 Å⁻³ at the mid-plane. The first characteristic cavity size V1 is only weakly dependent on position in the bilayer interior, with an average value of 3.4 Å³, while the second characteristic cavity size V2 varies more dramatically, from a plateau value of 12.9 Å³ in the highly ordered chain region to 9.0 Å³ in the center of the bilayer. The mean cavity shape is described in terms of a probability distribution for the angle at which the test permeant is in contact with one of, and does not overlap with any of, the chain segments in the bilayer. The results show that (a) free volume is elongated in the highly ordered chain region, with its long axis normal to the bilayer interface, approaching spherical symmetry in the center of the bilayer, and (b) small free volume is more elongated than large free volume. The order and conformational structures relevant to the free-volume distributions are also examined. It is found that both overall and internal motions have comparable contributions to local disorder and couple strongly with each other, and that the occurrence of kink defects has a higher probability than predicted from an independent-transition model. PMID:8241390
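
    A sketch of fitting the reported two-exponential form p(V) = p1 exp(-V/V1) + p2 exp(-V/V2) to a binned free-volume histogram; the synthetic data merely exercise the fitting step, seeded with the values quoted above:

        import numpy as np
        from scipy.optimize import curve_fit

        def two_exp(v, p1, v1, p2, v2):
            return p1 * np.exp(-v / v1) + p2 * np.exp(-v / v2)

        v = np.linspace(0.0, 30.0, 60)                  # cavity volume, cubic angstroms
        true = two_exp(v, 0.012, 3.4, 0.00039, 12.9)    # values quoted in the abstract
        rng = np.random.default_rng(5)
        observed = true * rng.normal(1.0, 0.05, v.size) # 5% multiplicative noise

        params, _ = curve_fit(two_exp, v, observed, p0=(0.01, 3.0, 0.001, 10.0))
        print("p1, V1, p2, V2 =", params)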

  18. Toward a microscopic model of bidirectional synaptic plasticity

    PubMed Central

    Castellani, Gastone C.; Bazzani, Armando; Cooper, Leon N

    2009-01-01

We show that a 2-step phospho/dephosphorylation cycle for the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR), as used in in vivo learning experiments to assess long-term potentiation (LTP) induction and establishment, exhibits bistability for a wide range of parameters, consistent with values derived from biological literature. The AMPAR model we propose, hence, is a candidate for memory storage and switching behavior at a molecular-microscopic level. Furthermore, the stochastic formulation of the deterministic model leads to a mesoscopic interpretation by considering the effect of enzymatic fluctuations on the Michaelis–Menten average dynamics. Under suitable hypotheses, this leads to a stochastic dynamical system with multiplicative noise whose probability density evolves according to a Fokker–Planck equation in the Stratonovich sense. In this approach, the probability density associated with each AMPAR phosphorylation state allows one to compute the probability of any concentration value, whereas the Michaelis–Menten equations consider the average concentration dynamics. We show that bistable dynamics are robust for multiplicative stochastic perturbations and that the presence of both noise and bistability simulates LTP and long-term depression (LTD) behavior. Interestingly, the LTP part of this model has been experimentally verified as a result of an in vivo, one-trial inhibitory avoidance learning protocol in rats, which produced the same changes in hippocampal AMPARs phosphorylation state as observed with in vitro induction of LTP with high-frequency stimulation (HFS). A consequence of this model is the possibility of characterizing a molecular switch with a defined biochemical set of reactions showing bistability and bidirectionality. Thus, this three-enzyme-based biophysical model can predict LTP as well as LTD and their transition rates. The theoretical results can be, in principle, validated by in vitro and in vivo experiments, such as fluorescence measurements and electrophysiological recordings at multiple scales, from molecules to neurons. A further consequence is that the bistable regime occurs only within certain parametric windows, which may simulate a “history-dependent threshold”. This effect might be related to the Bienenstock–Cooper–Munro theory of synaptic plasticity. PMID:19666550

  19. Modeling of the hydrogen maser disk in MWC 349

    NASA Astrophysics Data System (ADS)

    Ponomarev, Victor O.; Smith, Howard A.; Strelnitski, Vladimir S.

    1994-04-01

Maser amplification in a Keplerian circumstellar disk seen edge-on, the idea put forward by Gordon (1992), Martin-Pintado, & Serabyn (1992), and Thum, Martin-Pintado, & Bachiller (1992) to explain the millimeter hydrogen recombination lines in MWC 349, is further justified and developed here. The double-peaked (vs. possible triple-peaked) form of the observed spectra is explained by the reduced emission from the inner portion of the disk, the portion responsible for the central ('zero velocity') component of a triple-peaked spectrum. Radial gradients of electron density and/or free-free absorption within the disk are identified as the probable causes of this central 'hole' in the disk and of its opacity. We calculate a set of synthetic maser spectra radiated by a homogeneous Keplerian ring seen edge-on and compare them to the H30α observations of Thum et al., averaged over about 1000 days. We used a simple graphical procedure to solve an inverse problem and deduced the probable values of some basic disk and maser parameters. We find that the maser is essentially unsaturated, and that the most probable values of the electron temperature, Doppler width of the microturbulence, and electron density, all averaged along the amplification path, are, correspondingly, Te ≤ 11,000 K, Vmicro ≤ 14 km/s, and ne ≈ (3 ± 2) × 10⁷ cm⁻³. The model shows that radiation at every frequency within the spectrum arises in a monochromatic 'hot spot.' The maximum optical depth within the 'hot spot' producing radiation at the spectral peak maximum is τmax ≈ 6 ± 1; the effective width of the masing ring is ≈ 0.4-0.7 times its outer diameter; the size of the 'hot spot' responsible for the radiation at the spectral peak frequency is ≈ 0.2-0.3 times the distance between the two 'hot spots' corresponding to the two peaks. An important quantity derived from our model is the dynamical mass of the central star, M* ≈ 26 solar masses (D/1.2 kpc), D being the distance to the star. Prospects for improving the model are discussed.

  20. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
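
    A sketch of the Legendre-expansion step: if the axisymmetric PDF in u = cos(theta) is rho(u) = sum_l c_l P_l(u), the switching probability is the integral of rho over the reversed hemisphere u in [-1, 0]. The coefficients below are illustrative, not taken from the paper:

        import numpy as np
        from numpy.polynomial import legendre as L

        c = np.array([0.5, 0.6, 0.2])    # c_0 = 0.5 normalises rho to 1 on [-1, 1]

        u = np.linspace(-1.0, 1.0, 2001)
        rho = L.legval(u, c)
        print("normalisation:", round(np.trapz(rho, u), 4))   # equals 2*c_0 = 1

        # Integrate the Legendre series exactly via its antiderivative.
        antideriv = L.legint(c)
        p_switch = L.legval(0.0, antideriv) - L.legval(-1.0, antideriv)
        print("switching probability:", round(p_switch, 4))

    Because only the l = 0 term survives integration over the full sphere, normalisation and the hemisphere integral both reduce to simple evaluations of the integrated series, which is what makes the expansion convenient for a compact switching-probability expression.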

  1. Detection of nuclear resonance signals: modification of the receiver operating characteristics using feedback.

    PubMed

    Blauch, A J; Schiano, J L; Ginsberg, M D

    2000-06-01

    The performance of a nuclear resonance detection system can be quantified using binary detection theory. Within this framework, signal averaging increases the probability of a correct detection and decreases the probability of a false alarm by reducing the variance of the noise in the average signal. In conjunction with signal averaging, we propose another method based on feedback control concepts that further improves detection performance. By maximizing the nuclear resonance signal amplitude, feedback raises the probability of correct detection. Furthermore, information generated by the feedback algorithm can be used to reduce the probability of false alarm. We discuss the advantages afforded by feedback that cannot be obtained using signal averaging. As an example, we show how this method is applicable to the detection of explosives using nuclear quadrupole resonance. Copyright 2000 Academic Press.
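
    A minimal numerical illustration of the signal-averaging effect described above, assuming Gaussian noise; the amplitude, noise level, and threshold are arbitrary:

        import numpy as np
        from scipy.stats import norm

        amplitude, sigma, threshold = 1.0, 2.0, 0.5
        for n_avg in (1, 4, 16, 64):
            s = sigma / np.sqrt(n_avg)                        # noise deviation after averaging
            p_fa = norm.sf(threshold, loc=0.0, scale=s)       # false alarm (no signal)
            p_d = norm.sf(threshold, loc=amplitude, scale=s)  # correct detection
            print(f"N={n_avg:3d}  P_fa={p_fa:.4f}  P_d={p_d:.4f}")

    Raising the signal amplitude, as the proposed feedback scheme aims to do, shifts the signal distribution further above the threshold and improves the detection probability beyond what averaging alone achieves.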

  2. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  3. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
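
    A sketch of the constrained choice of scheduled arrival time with a truncated normal transit-time density; the cost model and all numbers are illustrative assumptions:

        import numpy as np
        from scipy.stats import truncnorm

        mu, sd, lo, hi = 30.0, 5.0, 20.0, 60.0        # transit time (minutes)
        a, b = (lo - mu) / sd, (hi - mu) / sd
        transit = truncnorm(a, b, loc=mu, scale=sd)   # truncated transit-time density

        target = 0.95                                 # required punctuality level
        schedules = np.linspace(25.0, 55.0, 301)
        punctuality = transit.cdf(schedules)          # P(arrive within schedule)
        cost = 1.0 * schedules + 40.0 * (1.0 - punctuality)   # slack cost + lateness penalty

        feasible = punctuality >= target
        best = schedules[feasible][np.argmin(cost[feasible])]
        print(f"best scheduled time: {best:.1f} min, punctuality {transit.cdf(best):.3f}")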

  4. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  5. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
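
    The abstract does not spell out the hypothesis test; one concrete instance of the idea, a Gaussian fit to training residuals followed by a sequential probability ratio test (SPRT) against a postulated mean shift, can be sketched as follows:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        training = rng.normal(0.0, 1.0, 5000)      # residuals from a healthy asset
        mu0, s = training.mean(), training.std()   # fitted "normal operation" density
        mu1 = mu0 + 2.0 * s                        # postulated fault offset (assumption)

        # SPRT thresholds for 1% false alarms and 1% missed alarms.
        alpha, beta = 0.01, 0.01
        upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

        llr = 0.0
        for x in rng.normal(mu1, s, 1000):         # incoming stream with a real fault
            llr += norm.logpdf(x, mu1, s) - norm.logpdf(x, mu0, s)
            if llr >= upper:
                print("fault alarm")
                break
            if llr <= lower:
                llr = 0.0                          # accept "normal" and restart the test
        else:
            print("no decision")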

  6. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  7. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  8. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

Simple models for the patterns, as well as the cumulative gain probability and probability density functions, of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network.

  9. Stochastic modelling of intermittent fluctuations in the scrape-off layer: Correlations, distributions, level crossings, and moment estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.

A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a superposition of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
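
    One of the predictions above can be checked numerically: with exponential pulses and exponentially distributed amplitudes, the stationary distribution is a Gamma density with shape tau_d/tau_w and scale equal to the mean amplitude, where tau_w is the mean pulse waiting time (parameter values below are arbitrary):

        import numpy as np

        rng = np.random.default_rng(7)
        dt, n = 0.01, 500_000
        tau_d, tau_w, mean_amp = 1.0, 2.0, 1.0   # pulse duration, waiting time, amplitude
        signal = np.zeros(n)
        kernel = np.exp(-np.arange(int(10 * tau_d / dt)) * dt / tau_d)
        n_pulses = rng.poisson(n * dt / tau_w)
        for k0, a in zip(rng.integers(0, n, n_pulses), rng.exponential(mean_amp, n_pulses)):
            seg = kernel[: n - k0]
            signal[k0:k0 + seg.size] += a * seg

        shape = tau_d / tau_w                    # predicted Gamma shape parameter
        stationary = signal[100_000:]            # discard the initial transient
        print("mean: predicted", shape * mean_amp, "observed", round(stationary.mean(), 3))
        print("var : predicted", shape * mean_amp**2, "observed", round(stationary.var(), 3))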

  10. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specular- and diffusely-reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to get the complete average for the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation of the input stochastic process (the extinction function of the medium) is applied. This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to get a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique to the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to get complete analytical averages for some interesting physical quantities, namely, the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with an internal source of radiation is obtained and represented graphically.
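
    The RVT step can be illustrated on a toy version of the problem: if the random optical thickness L has a known density and the deterministic solution is taken to be a transmissivity T = exp(-L) (a stand-in for the actual closed-form solution), the output density follows by a change of variables, f_T(t) = f_L(-ln t)/t:

        import numpy as np
        from scipy.stats import expon

        f_L = expon(scale=1.0)              # assumed input density for L
        t = np.linspace(0.01, 0.99, 99)
        f_T = f_L.pdf(-np.log(t)) / t       # transformed output density

        # Monte Carlo check: push samples of L through the deterministic map.
        samples = np.exp(-f_L.rvs(size=200_000, random_state=8))
        hist, edges = np.histogram(samples, bins=20, range=(0, 1), density=True)
        centres = 0.5 * (edges[:-1] + edges[1:])
        print(np.allclose(np.interp(centres, t, f_T), hist, rtol=0.1))

    For the exponential input density used here the transformed density is exactly uniform on (0, 1), which the Monte Carlo check confirms.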

  11. Persisting effects of armored military maneuvers on some soils of the Mojave Desert

    USGS Publications Warehouse

    Prose, D.V.

    1985-01-01

Soil compaction and substrate modification produced during large-scale armored military maneuvers in the early 1940s were examined in 1981 at seven sites in California's eastern Mojave Desert. Recording penetrometer measurements show that tracks left by a single pass of an M3 "medium" tank have average soil resistance values that are 50% greater than those of the surrounding untracked soil in the upper 20 cm. At one site, measurements made along short segments of track that have been visually eliminated by erosion and deposition processes show a 73% increase in penetrometer resistance over adjacent, undisturbed soils. Dirt roadways at three former base camp locations could not be penetrated below 5-10 cm because of extreme compaction. Soil bulk density was not as sensitive an indicator of soil compaction as was penetrometer resistance. Density values in the upper 10 cm of soil are not significantly different between tank tracks and undisturbed soils at most sites, and roadways at two base camps show an average increase in bulk density of only 12% over adjacent soils. Trench excavations across tank tracks show that physical modifications of the substrate can extend vertically beneath a track to a depth of 25 cm and outward from a track's edge to 50 cm. These soil disturbances are probably major factors that encourage accelerated soil erosion throughout the maneuver area and also retard or prevent the return of vegetation to pre-disturbance conditions. © 1985 Springer-Verlag New York Inc.

  12. Effects of plantation density on wood density and anatomical properties of red pine (Pinus resinosa Ait.)

    Treesearch

    J. Y. Zhu; C. Tim Scott; Karen L. Scallon; Gary C. Myers

    2007-01-01

    This study demonstrated that average ring width (or average annual radial growth rate) is a reliable parameter to quantify the effects of tree plantation density (growth suppression) on wood density and tracheid anatomical properties. The average ring width successfully correlated wood density and tracheid anatomical properties of red pines (Pinus resinosa Ait.) from a...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ce; Auger, Maria A.; Moody, Michael P.

In this study, ferritic/martensitic (F/M) HT9 steel was irradiated to 20 displacements per atom (dpa) at 600 nm depth at 420 and 440 °C, and to 1, 10 and 20 dpa at 600 nm depth at 470 °C, using 5 MeV Fe++ ions. The characterization was conducted using ChemiSTEM and Atom Probe Tomography (APT), with a focus on radiation-induced segregation and precipitation. Ni and/or Si segregation at defect sinks (grain boundaries, dislocation lines, carbide/matrix interfaces), together with Ni, Si, Mn rich G-phase precipitation, was observed in self-ion irradiated HT9 except in the very low dose case (1 dpa at 470 °C). Some G-phase precipitates were found to nucleate heterogeneously at defect sinks where Ni and/or Si segregated. In contrast to what was previously reported in the literature for neutron-irradiated HT9, no Cr-rich α' phase, χ phases, η phase, or voids were found in self-ion irradiated HT9. The difference in observed microstructures is probably due to the difference in irradiation dose rate between ion irradiation and neutron irradiation. In addition, the average size and number density of G-phase precipitates were found to be sensitive to both irradiation temperature and dose. At the same irradiation dose, the average size of the G-phase increased whereas the number density decreased with increasing irradiation temperature. At the same irradiation temperature, the average size increased with increasing irradiation dose.

  14. Going through a quantum phase

    NASA Technical Reports Server (NTRS)

    Shapiro, Jeffrey H.

    1992-01-01

Phase measurements on a single-mode radiation field are examined from a system-theoretic viewpoint. Quantum estimation theory is used to establish the primacy of the Susskind-Glogower (SG) phase operator; its phase eigenkets generate the probability operator measure (POM) for maximum likelihood phase estimation. A commuting observables description for the SG-POM on a signal × apparatus state space is derived. It is analogous to the signal-band × image-band formulation for optical heterodyne detection. Because heterodyning realizes the annihilation operator POM, this analogy may help realize the SG-POM. The wave function representation associated with the SG POM is then used to prove the duality between the phase measurement and the number operator measurement, from which a number-phase uncertainty principle is obtained, via Fourier theory, without recourse to linearization. Fourier theory is also employed to establish the principle of number-ket causality, leading to a Paley-Wiener condition that must be satisfied by the phase-measurement probability density function (PDF) for a single-mode field in an arbitrary quantum state. Finally, a two-mode phase measurement is shown to afford phase-conjugate quantum communication at zero error probability with finite average photon number. Application of this construct to interferometric precision measurements is briefly discussed.

  15. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  16. Field measurement of basal forces generated by erosive debris flows

    USGS Publications Warehouse

    McCoy, S.W.; Tucker, G.E.; Kean, J.W.; Coe, J.A.

    2013-01-01

It has been proposed that debris flows cut bedrock valleys in steeplands worldwide, but field measurements needed to constrain mechanistic models of this process remain sparse due to the difficulty of instrumenting natural flows. Here we present and analyze measurements made using an automated sensor network, erosion bolts, and a 15.24 cm by 15.24 cm force plate installed in the bedrock channel floor of a steep catchment. These measurements allow us to quantify the distribution of basal forces from natural debris-flow events that incised bedrock. Over the 4 year monitoring period, 11 debris-flow events scoured the bedrock channel floor. No clear water flows were observed. Measurements of erosion bolts at the beginning and end of the study indicated that the bedrock channel floor was lowered by 36 to 64 mm. The basal force during these erosive debris-flow events had a large-magnitude (up to 21 kN, which was approximately 50 times larger than the concurrent time-averaged mean force), high-frequency (greater than 1 Hz) fluctuating component. We interpret these fluctuations as flow particles impacting the bed. The resulting variability in force magnitude increased linearly with the time-averaged mean basal force. Probability density functions of basal normal forces were consistent with a generalized Pareto distribution, rather than the exponential distribution that is commonly found in experimental and simulated monodispersed granular flows and which has a lower probability of large forces. When the bed sediment thickness covering the force plate was greater than ~20 times the median bed sediment grain size, no significant fluctuations about the time-averaged mean force were measured, indicating that a thin layer of sediment (~5 cm in the monitored cases) can effectively shield the subjacent bed from erosive impacts. Coarse-grained granular surges and water-rich, intersurge flow had very similar basal force distributions despite differences in appearance and bulk-flow density. These results demonstrate that debris flows can have strong control on rates of steepland evolution and contribute to a foundation needed for modeling debris-flow incision stochastically.
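
    A sketch of fitting a generalized Pareto distribution to force peaks above a threshold, as reported above; the synthetic peaks only exercise the fitting and the comparison with an exponential tail:

        import numpy as np
        from scipy.stats import genpareto, expon

        # Hypothetical force-plate peak excesses above a threshold, in kN.
        peaks = genpareto.rvs(c=0.3, scale=2.0, size=2000, random_state=9)

        c_hat, loc_hat, scale_hat = genpareto.fit(peaks, floc=0.0)
        print("GPD shape, scale:", round(c_hat, 3), round(scale_hat, 3))

        # A positive shape parameter implies a heavier tail than the exponential
        # law typical of monodispersed granular flows.
        q = np.array([0.9, 0.99, 0.999])
        print("GPD tail quantiles:", genpareto.ppf(q, c_hat, scale=scale_hat))
        print("exponential match :", expon.ppf(q, scale=peaks.mean()))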

  17. Small-scale plasma, magnetic, and neutral density fluctuations in the nightside Venus ionosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Hoegy, W.R.; Brace, L.H.; Kasprzak, W.T.

    1990-04-01

Pioneer Venus orbiter measurements have shown that coherent small-scale waves exist in the electron density, the electron temperature, and the magnetic field in the lower ionosphere of Venus just downstream of the solar terminator (Brace et al., 1983). The waves become less regular and less coherent at larger solar zenith angles, and Brace et al. suggested that these structures may have evolved from the terminator waves as they are convected into the nightside ionosphere, driven by the day-to-night plasma pressure gradient. In this paper the authors describe the changes in wave characteristics with solar zenith angle and show that the neutral gas also has related wave characteristics, probably because of atmospheric gravity waves. The plasma pressure exceeds the magnetic pressure in the nightside ionosphere at these altitudes, and thus the magnetic field is carried along and controlled by the turbulent motion of the plasma, but the wavelike nature of the thermosphere may also be coupled to the plasma and magnetic structure. They show that there is a significant coherence between the ionosphere, thermosphere, and magnetic parameters at altitudes below about 185 km, a coherence which weakens in the antisolar region. The electron temperature and density are approximately 180° out of phase and consistently exhibit the highest correlation of any pair of variables. Waves in the electron and neutral densities are moderately correlated on most orbits, but with a phase difference that varies within each orbit. The average electron temperature is higher when the average magnetic field is more horizontal; however, the correlation between temperature and dip angle does not extend to individual wave structures observed within a satellite pass, particularly in the antisolar region.

  18. Turbulent fluctuations during pellet injection into a dipole confined plasma torus

    DOE PAGES

    Garnier, D. T.; Mauel, M. E.; Roberts, T. M.; ...

    2017-01-01

    Here, we report measurements of the turbulent evolution of the plasma density profile following the fast injection of lithium pellets into the Levitated Dipole Experiment (LDX) [Boxer et al., Nat. Phys. 6, 207 (2010)]. As the pellet passes through the plasma, it provides a significant internal particle source and allows investigation of density profile evolution, turbulent relaxation, and turbulent fluctuations. The total electron number within the dipole plasma torus increases by more than a factor of three, and the central density increases by more than a factor of five. During these large changes in density, the shape of the density profile is nearly “stationary” such that the gradient of the particle number within tubes of equal magnetic flux vanishes. In comparison to the usual case, when the particle source is neutral gas at the plasma edge, the internal source from the pellet causes the toroidal phase velocity of the fluctuations to reverse and changes the average particle flux at the plasma edge. An edge particle source creates an inward turbulent pinch, but an internal particle source increases the outward turbulent particle flux. Statistical properties of the turbulence are measured by multiple microwave interferometers and by an array of probes at the edge. The spatial structures of the largest amplitude modes have long radial and toroidal wavelengths. Estimates of the local and toroidally averaged turbulent particle flux show intermittency and a non-Gaussian probability distribution function. The measured fluctuations, both before and during pellet injection, have frequency and wave number dispersion consistent with theoretical expectations for interchange and entropy modes excited within a dipole plasma torus having warm electrons and cool ions.

  19. An empirical probability density distribution of planetary ionosphere storms with geomagnetic precursors

    NASA Astrophysics Data System (ADS)

    Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan

    The probability of occurrence of the positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by Jet Propulsion Laboratory, and transformed from geographic coordinates to the magnetic coordinates frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. The superposed epoch analysis is performed for 77 intense storms (Dst ≤ -100 nT) and 227 moderate storms (-100 nT < Dst ≤ -50 nT).

  20. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)

  1. Spatial distribution and sequential sampling plans for Tuta absoluta (Lepidoptera: Gelechiidae) in greenhouse tomato crops.

    PubMed

    Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino

    2015-09-01

    The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.

  2. Cache-enabled small cell networks: modeling and tradeoffs.

    PubMed

    Baştuğ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane

    We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.
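
    Where the paper derives closed-form expressions, the scaling can also be checked by brute force. The sketch below is a minimal Monte Carlo estimate of SINR outage for a PPP of SBSs with nearest-station association and Rayleigh fading; all parameter values (density, path-loss exponent, threshold) are chosen purely for illustration and are not taken from the paper:

      import numpy as np

      rng = np.random.default_rng(1)
      lam = 1e-4        # SBS density per m^2 (illustrative)
      alpha = 4.0       # path-loss exponent
      theta = 1.0       # SINR threshold (0 dB)
      R = 3000.0        # simulation disk radius, large vs. 1/sqrt(lam)

      outages, trials = 0, 2000
      for _ in range(trials):
          n = rng.poisson(lam * np.pi * R**2)   # number of SBSs in the disk
          r = R * np.sqrt(rng.random(n))        # uniform points in a disk: radii
          h = rng.exponential(size=n)           # Rayleigh fading powers
          p = h * r ** (-alpha)                 # received powers at the origin
          s = np.argmin(r)                      # associate with the nearest SBS
          sinr = p[s] / (p.sum() - p[s])        # interference-limited regime
          outages += sinr < theta
      print("outage probability ~", outages / trials)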

  3. Traffic handling capability of a broadband indoor wireless network using CDMA multiple access

    NASA Astrophysics Data System (ADS)

    Zhang, Chang G.; Hafez, H. M.; Falconer, David D.

    1994-05-01

    CDMA (code division multiple access) may be an attractive technique for wireless access to broadband services because of its multiple access simplicity and other appealing features. In order to investigate traffic handling capabilities of a future network providing a variety of integrated services, this paper presents a study of a broadband indoor wireless network supporting high-speed traffic using CDMA multiple access. The results are obtained through the simulation of an indoor environment and the traffic capabilities of the wireless access to broadband 155.5 MHz ATM-SONET networks using the mm-wave band. A distributed system architecture is employed and the system performance is measured in terms of call blocking probability and dropping probability. The impacts of the base station density, traffic load, average holding time, and variable traffic sources on the system performance are examined. The improvement of system performance by implementing various techniques such as handoff, admission control, power control and sectorization are also investigated.

  4. Characterizing the radial content of orbital-angular-momentum photonic states impaired by weak-to-strong atmospheric turbulence.

    PubMed

    Chen, Chunyi; Yang, Huamin

    2016-08-22

    The changes in the radial content of orbital-angular-momentum (OAM) photonic states described by Laguerre-Gaussian (LG) modes with a radial index of zero, suffering from turbulence-induced distortions, are explored by numerical simulations. For a single-photon field with a given LG mode propagating through weak-to-strong atmospheric turbulence, both the average LG and OAM mode densities depend only on two nondimensional parameters, i.e., the Fresnel ratio and the coherence-width-to-beam-radius (CWBR) ratio. It is found that atmospheric turbulence causes radially-adjacent-mode mixing, besides azimuthally-adjacent-mode mixing, in the propagated photonic states; the former is relatively slighter than the latter. With the same Fresnel ratio, the probabilities that a photon can be found in the zero-index radial mode of the intended OAM states behave very similarly as a function of the relative turbulence strength; a smaller Fresnel ratio leads to a slower decrease in the probabilities as the relative turbulence strength increases. A photon can be found in various radial modes with approximately equal probability when the relative turbulence strength becomes great enough. The use of a single-mode fiber in OAM measurements can result in photon loss and hence alter the observed transition probability between various OAM states. The bit error probability in OAM-based free-space optical communication systems that transmit photonic modes belonging to the same orthogonal LG basis may depend on what digit is sent.

  5. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
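
    For readers unfamiliar with the PIT criterion mentioned above, the toy sketch below shows the idea on single-Gaussian predictions (the DCMDN itself outputs Gaussian mixtures); a calibrated predictor yields PIT values that are uniform on (0, 1). All numbers are synthetic:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      truth = rng.normal(0.5, 0.1, 5000)               # toy "true" redshifts
      mu = truth + rng.normal(0, 0.02, truth.size)     # toy predicted means
      pit = stats.norm.cdf(truth, loc=mu, scale=0.02)  # predictive CDF at truth
      hist, _ = np.histogram(pit, bins=10, range=(0, 1))
      print("PIT histogram (flat means calibrated):", hist)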

  6. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  7. Organic and inorganic molecules as probes of mineral surfaces (Invited)

    NASA Astrophysics Data System (ADS)

    Sverjensky, D. A.

    2010-12-01

    Although the multi-site nature of mineral surfaces is to be expected based on the underlying crystal structure, definitive evidence of the need to use more than one site in modelling proton surface charge or adsorption of a single adsorbate at the mineral-water interface is lacking. Instead, a single-site approach affords a practical way of averaging over all possible crystal planes and sites in a powdered mineral sample. Extensive analysis of published proton surface charge and adsorption of metals on oxide mineral surfaces can be undertaken with a single site density for each mineral based on tritium exchange or estimation from averages of the site densities of likely exposed surfaces. Even in systems with competing metals (e.g. Cu and Pb on hematite), the same site density as used for proton surface charge can be employed depending on the reaction stoichiometry. All of this indicates that protons and metals can bind to a great variety of sites with the same overall site density. However, simple oxyanions such as carbonate, sulfate, selenate, arsenate and arsenite require a much lower site density for a given mineral. For example, on goethite these oxyanions utilize a site density that correlates with the BET surface area of the goethite. In this way, the oxyanions can be thought of as selectively probing the available sites on the mineral. The correlation probably arises because goethites with different BET surface areas have different proportions of singly and multiply-bonded oxygens, and only the singly-bonded oxygens are useful for inner-sphere surface complexation by the ligand exchange mechanism. Small organic molecules behave in a remarkably similar way. For example, adsorption of oxalate on goethite, and aspartate, glutamate, dihydroxyphenylalanine, lysine and arginine on rutile are all consistent with a much smaller site density than those required for metals such as calcium or neodymium. Overall, these results suggest that both inorganic oxyanions and organic molecules containing carboxylate functional groups serve as much more sensitive probes of the surface structures of minerals than do protons or metals.

  8. On the abundance of extraterrestrial life after the Kepler mission

    NASA Astrophysics Data System (ADS)

    Wandel, Amri

    2015-07-01

    The data recently accumulated by the Kepler mission have demonstrated that small planets are quite common and that a significant fraction of all stars may have an Earth-like planet within their habitable zone. These results are combined with a Drake-equation formalism to derive the space density of biotic planets as a function of the relatively modest uncertainty in the astronomical data and of the (yet unknown) probability for the evolution of biotic life, F_b. I suggest that F_b may be estimated by future spectral observations of exoplanet biomarkers. If F_b is in the range 0.001-1, then a biotic planet may be expected within 10-100 light years from Earth. Extending the biotic results to advanced life I derive expressions for the distance to putative civilizations in terms of two additional Drake parameters - the probability for evolution of a civilization, F_c, and its average longevity. For instance, assuming optimistic probability values (F_b ~ F_c ~ 1) and a broadcasting longevity of a few thousand years, the likely distance to the nearest civilizations detectable by searching for intelligent electromagnetic signals is of the order of a few thousand light years. The probability of detecting intelligent signals with present and future radio telescopes is calculated as a function of the Drake parameters. Finally, I describe how the detection of intelligent signals would constrain the Drake parameters.
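
    The distance scaling behind such numbers is a nearest-neighbor argument: if biotic planets have space density n, the nearest one lies at roughly d = (3 / 4πn)^(1/3). The sketch below reproduces the quoted 10-100 light-year range under an assumed local stellar density and habitable-planet fraction; both are round illustrative values, not the paper's inputs:

      import math

      stars_per_ly3 = 0.004  # rough local stellar density (per cubic light year)
      f_hz = 0.1             # assumed fraction of stars with an HZ Earth analog
      for f_b in (0.001, 0.01, 0.1, 1.0):
          n = stars_per_ly3 * f_hz * f_b  # space density of biotic planets
          d = (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)
          print(f"F_b = {f_b}: nearest biotic planet ~ {d:.0f} light years")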

  9. Noise reduction in heat-assisted magnetic recording of bit-patterned media by optimizing a high/low Tc bilayer structure

    NASA Astrophysics Data System (ADS)

    Muthsam, O.; Vogler, C.; Suess, D.

    2017-12-01

    Heat-assisted magnetic recording is expected to be the recording technique of the future. For pure hard magnetic grains in high density media with an average diameter of 5 nm and a height of 10 nm, the switching probability is not sufficiently high for use in bit-patterned media. Using a bilayer structure with 50% hard magnetic material with low Curie temperature and 50% soft magnetic material with high Curie temperature to obtain more than 99.2% switching probability leads to very large jitter. We propose an optimized material composition to reach a switching probability of P_switch > 99.2% and simultaneously achieve the narrow transition jitter of pure hard magnetic material. Simulations with a continuous laser spot were performed with the atomistic simulation program VAMPIRE for a single cylindrical recording grain with a diameter of 5 nm and a height of 10 nm. Different configurations of soft magnetic material and different amounts of hard and soft magnetic material were tested and discussed. Within our analysis, a composition with 20% soft magnetic and 80% hard magnetic material reaches the best results, with a switching probability P_switch > 99.2%, an off-track jitter parameter σ_off,80/20 = 0.46 nm and a down-track jitter parameter σ_down,80/20 = 0.49 nm.

  10. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
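
    The core quantity in such algorithms is the probability that two uncertain objects lie within the distance threshold. The sketch below estimates it by sampling, i.e., in the FDBSCAN style that the paper improves upon with a more accurate computation; the Gaussian uncertainty model and all values are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(3)
      eps, m = 1.0, 20000
      # Two uncertain objects, each a Gaussian cloud around its reported location.
      x = rng.normal([0.0, 0.0], 0.4, size=(m, 2))
      y = rng.normal([1.2, 0.3], 0.4, size=(m, 2))
      p = (np.linalg.norm(x - y, axis=1) <= eps).mean()
      print("P(dist(X, Y) <= eps) ~", p)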

  11. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  12. Dramatic Dione

    NASA Image and Video Library

    2018-03-12

    Cassini captured this striking view of Saturn's moon Dione on July 23, 2012. Dione is about 698 miles (1,123 kilometers) across. Its density suggests that about a third of the moon is made up of a dense core (probably silicate rock) with the remainder of its material being water ice. At Dione's average temperature of -304 degrees Fahrenheit (-186 degrees Celsius), ice is so hard it behaves like rock. The image was taken with Cassini's narrow-angle camera at a distance of approximately 260,000 miles (418,000 kilometers) from Dione, through a polarized filter and a spectral filter sensitive to green light. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA17197

  13. LES, DNS and RANS for the analysis of high-speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman

    1994-01-01

    The objective of this research is to continue our efforts in advancing the state of knowledge in Large Eddy Simulation (LES), Direct Numerical Simulation (DNS), and Reynolds Averaged Navier-Stokes (RANS) methods for the analysis of high-speed reacting turbulent flows. In the first phase of this research, conducted within the past six months, focus was in three directions: RANS of turbulent reacting flows by Probability Density Function (PDF) methods, RANS of non-reacting turbulent flows by advanced turbulence closures, and LES of mixing-dominated reacting flows by a dynamic subgrid closure. A summary of our efforts within the past six months of this research is provided in this semi-annual progress report.

  14. Multiple mobility edges in a 1D Aubry chain with Hubbard interaction in presence of electric field: Controlled electron transport

    NASA Astrophysics Data System (ADS)

    Saha, Srilekha; Maiti, Santanu K.; Karmakar, S. N.

    2016-09-01

    Electronic behavior of a 1D Aubry chain with Hubbard interaction is critically analyzed in the presence of an electric field. Multiple energy bands are generated as a result of the Hubbard correlation and the Aubry potential, and within these bands localized states develop under the application of the electric field. Within a tight-binding framework we compute the electronic transmission probability and average density of states using a Green's function approach, where the interaction parameter is treated under the Hartree-Fock mean field scheme. From our analysis we find that selective transmission can be obtained by tuning the energy of the injected electrons, and thus the present model can be utilized as a controlled switching device.

  15. Analysis of mean seismic ground motion and its uncertainty based on the UCERF3 geologic slip rate model with uncertainty for California

    USGS Publications Warehouse

    Zeng, Yuehua

    2018-01-01

    The Uniform California Earthquake Rupture Forecast v.3 (UCERF3) model (Field et al., 2014) considers epistemic uncertainty in fault‐slip rate via the inclusion of multiple rate models based on geologic and/or geodetic data. However, these slip rates are commonly clustered about their mean value and do not reflect the broader distribution of possible rates and associated probabilities. Here, we consider both a double‐truncated 2σ Gaussian and a boxcar distribution of slip rates and use a Monte Carlo simulation to sample the entire range of the distribution for California fault‐slip rates. We compute the seismic hazard following the methodology and logic‐tree branch weights applied to the 2014 national seismic hazard model (NSHM) for the western U.S. region (Petersen et al., 2014, 2015). By applying a new approach developed in this study to the probabilistic seismic hazard analysis (PSHA) using precomputed rates of exceedance from each fault as a Green’s function, we reduce the computer time by about 10^5‐fold and apply it to the mean PSHA estimates with 1000 Monte Carlo samples of fault‐slip rates to compare with results calculated using only the mean or preferred slip rates. The difference in the mean probabilistic peak ground motion corresponding to a 2% in 50‐yr probability of exceedance is less than 1% on average over all of California for both the Gaussian and boxcar probability distributions for slip‐rate uncertainty but reaches about 18% in areas near faults compared with that calculated using the mean or preferred slip rates. The average uncertainties in 1σ peak ground‐motion level are 5.5% and 7.3% of the mean with the relative maximum uncertainties of 53% and 63% for the Gaussian and boxcar probability density function (PDF), respectively.
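
    A minimal sketch of the sampling step described above, for one hypothetical fault: draw slip rates from a double-truncated (±2σ) Gaussian and from a boxcar over the same interval; each draw would then feed the precomputed per-fault exceedance rates to give one hazard realization. The mean and σ below are invented for illustration:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      mean, sigma, n = 5.0, 1.0, 1000  # slip rate (mm/yr), hypothetical fault

      # Double-truncated Gaussian on [mean - 2*sigma, mean + 2*sigma].
      gauss = stats.truncnorm.rvs(-2, 2, loc=mean, scale=sigma,
                                  size=n, random_state=rng)
      # Boxcar (uniform) over the same interval.
      boxcar = rng.uniform(mean - 2 * sigma, mean + 2 * sigma, size=n)

      print("truncated Gaussian: mean %.2f, sd %.2f" % (gauss.mean(), gauss.std()))
      print("boxcar:             mean %.2f, sd %.2f" % (boxcar.mean(), boxcar.std()))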

  16. A Waveform Detector that Targets Template-Decorrelated Signals and Achieves its Predicted Performance: Demonstration with IMS Data

    NASA Astrophysics Data System (ADS)

    Carmichael, J.

    2016-12-01

    Waveform correlation detectors used in seismic monitoring scan multichannel data to test two competing hypotheses: that data contain (1) a noisy, amplitude-scaled version of a template waveform, or (2) only noise. In reality, seismic wavefields include signals triggered by non-target sources (background seismicity) and target signals that are only partially correlated with the waveform template. We reform the waveform correlation detector hypothesis test to accommodate deterministic uncertainty in template/target waveform similarity and thereby derive a new detector from convex set projections (the "cone detector") for use in explosion monitoring. Our analyses give probability density functions that quantify the detectors' degraded performance with decreasing waveform similarity. We then apply our results to three announced North Korean nuclear tests and use International Monitoring System (IMS) arrays to determine the probability that low magnitude, off-site explosions can be reliably detected with a given waveform template. We demonstrate that cone detectors provide (1) an improved predictive capability over correlation detectors to identify such spatially separated explosive sources, (2) competitive detection rates, and (3) reduced false alarms on background seismicity. [Figure: observed and predicted receiver operating characteristic curves versus semi-empirical explosion magnitude for the correlation statistic r(x) (panel a) and the cone statistic s(x) (panel b); shaded regions give the predicted detection performance for noise conditions recorded over 24 hr on 8 October 2006, and superimposed stair plots give the empirical detection rate (detections/total events) averaged over the same day, with error bars indicating the demeaned daily range in observed detection probability.]

  17. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  18. Zealotry effects on opinion dynamics in the adaptive voter model

    NASA Astrophysics Data System (ADS)

    Klamser, Pascal P.; Wiedermann, Marc; Donges, Jonathan F.; Donner, Reik V.

    2017-11-01

    The adaptive voter model has been widely studied as a conceptual model for opinion formation processes on time-evolving social networks. Past studies on the effect of zealots, i.e., nodes aiming to spread their fixed opinion throughout the system, only considered the voter model on a static network. Here we extend the study of zealotry to the case of an adaptive network topology co-evolving with the state of the nodes and investigate opinion spreading induced by zealots depending on their initial density and connectedness. Numerical simulations reveal that below the fragmentation threshold a low density of zealots is sufficient to spread their opinion to the whole network. Beyond the transition point, zealots must exhibit an increased degree as compared to ordinary nodes for an efficient spreading of their opinion. We verify the numerical findings using a mean-field approximation of the model yielding a low-dimensional set of coupled ordinary differential equations. Our results imply that the spreading of the zealots' opinion in the adaptive voter model is strongly dependent on the link rewiring probability and the average degree of normal nodes in comparison with that of the zealots. In order to avoid a complete dominance of the zealots' opinion, there are two possible strategies for the remaining nodes: adjusting the probability of rewiring and/or the number of connections with other nodes, respectively.

  19. Biochemical and hematologic changes after short-term space flight

    NASA Technical Reports Server (NTRS)

    Leach, C. S.

    1992-01-01

    Clinical laboratory data from blood samples obtained from astronauts before and after 28 flights (average duration = 6 days) of the Space Shuttle were analyzed by the paired t-test and the Wilcoxon signed-rank test and compared with data from the Skylab flights (duration approximately 28, 59, and 84 days). Angiotensin I and aldosterone were elevated immediately after short-term space flights, but the response of angiotensin I was delayed after Skylab flights. Serum calcium was not elevated after Shuttle flights, but magnesium and uric acid decreased after both Shuttle and Skylab. Creatine phosphokinase in serum was reduced after Shuttle but not Skylab flights, probably because exercises to prevent deconditioning were not performed on the Shuttle. Total cholesterol was unchanged after Shuttle flights, but low density lipoprotein cholesterol increased and high density lipoprotein cholesterol decreased. The concentration of red blood cells was elevated after Shuttle flights and reduced after Skylab flights. Reticulocyte count was decreased after both short- and long-term flights, indicating that a reduction in red blood cell mass is probably more closely related to suppression of red cell production than to an increase in destruction of erythrocytes. Serum ferritin and number of platelets were also elevated after Shuttle flights. In determining the reasons for postflight differences between the shorter and longer flights, it is important to consider not only duration but also countermeasures, differences between spacecraft, and procedures for landing and egress.

  20. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.

  1. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
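
    The two-step recipe summarized above (spectral shaping, then a memoryless marginal transform) can be sketched in a few lines of Python. The power-law spectrum and the exponential target marginal below are arbitrary choices, and the final nonlinearity can slightly distort the imposed spectrum, a known limitation of this simple two-step approach:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      N = 256
      white = rng.standard_normal((N, N))           # white Gaussian sample set

      # Step 1: color the noise with a desired power spectral density.
      k = np.fft.fftfreq(N)
      KX, KY = np.meshgrid(k, k, indexing="ij")
      amp = 1.0 / np.sqrt(KX**2 + KY**2 + 0.01**2)  # ~ low-pass power law
      colored = np.real(np.fft.ifft2(np.fft.fft2(white) * amp))
      colored /= colored.std()                      # unit-variance Gaussian field

      # Step 2: memoryless transform to the desired amplitude distribution.
      u = stats.norm.cdf(colored)                   # uniform marginals in (0, 1)
      field = stats.expon.ppf(u, scale=2.0)         # exponential marginals
      print("field mean %.3f (target 2.0)" % field.mean())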

  2. Spectral density of mixtures of random density matrices for qubits

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Wang, Jiamei; Chen, Zhihua

    2018-06-01

    We derive the spectral density of the equiprobable mixture of two random density matrices of a two-level quantum system. We also work out the spectral density of mixture under the so-called quantum addition rule. We use the spectral densities to calculate the average entropy of mixtures of random density matrices, and show that the average entropy of the arithmetic-mean-state of n qubit density matrices randomly chosen from the Hilbert-Schmidt ensemble is never decreasing with the number n. We also get the exact value of the average squared fidelity. Some conjectures and open problems related to von Neumann entropy are also proposed.
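
    A quick numerical check of the monotonicity statement is straightforward: sample qubit density matrices from the Hilbert-Schmidt ensemble via complex Ginibre matrices and average the von Neumann entropy of the arithmetic-mean state. This sketch is an independent illustration, not the paper's analytic derivation:

      import numpy as np

      rng = np.random.default_rng(6)

      def hs_random_qubit():
          """Hilbert-Schmidt-random 2x2 density matrix via a Ginibre matrix."""
          g = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
          rho = g @ g.conj().T
          return rho / np.trace(rho)

      def von_neumann_entropy(rho):
          w = np.linalg.eigvalsh(rho)
          w = w[w > 1e-12]
          return float(-(w * np.log(w)).sum())

      for n in (1, 2, 4, 8):
          ent = [von_neumann_entropy(sum(hs_random_qubit() for _ in range(n)) / n)
                 for _ in range(2000)]
          print(f"n = {n}: average entropy of the mean state ~ {np.mean(ent):.3f}")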

  3. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    One approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we apply to compute the pdf of our data. The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a model of laser beam propagation in the maritime domain.
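
    For concreteness, the Kernel Density Method referred to above amounts to the following; the bimodal sample is a toy stand-in for the report's data:

      import numpy as np
      from scipy.stats import gaussian_kde
      from scipy.integrate import trapezoid

      rng = np.random.default_rng(7)
      samples = np.concatenate([rng.normal(-1, 0.4, 500),
                                rng.normal(2, 0.8, 500)])

      kde = gaussian_kde(samples)  # bandwidth from Scott's rule by default
      x = np.linspace(-3, 5, 400)
      pdf = kde(x)                 # estimated density on a grid
      print("integral over the grid ~ %.3f (should be near 1)" % trapezoid(pdf, x))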

  4. IN VITRO QUANTIFICATION OF THE SIZE DISTRIBUTION OF INTRASACCULAR VOIDS LEFT AFTER ENDOVASCULAR COILING OF CEREBRAL ANEURYSMS.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

    Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood, since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31 ± 2%) were lower (p = 0.04) than the stent-assist group (40 ± 7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed, with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted with the void sizes, but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and be used to compare and optimize coil configurations as well as coiling techniques.

  5. Derived distribution of floods based on the concept of partial area coverage with a climatic appeal

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro

    2000-02-01

    A new rationale for deriving the probability distribution of floods and help in understanding the physical processes underlying the distribution itself is presented. On the basis of this, a model that presents a number of new assumptions is developed. The basic ideas are as follows: (1) The peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area u_a and the peak contributing area a; (2) the distribution of u_a conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τ_a of the contributing part of the basin; and (3) τ_a is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of the density of a times the density function of u_a given a. It is suggested that u_a can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and u_a are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km². The results showed strong physical consistence as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge and suggested a new key to understanding the climatic control of the probability distribution of floods.
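
    In symbols, the integral described verbally above can be written as follows (one conventional form, with the change of variables q = u_a · a made explicit):

      f_Q(q) = \int_{0}^{A} f_{u_a \mid a}\!\left( \frac{q}{a} \,\middle|\, a \right) \frac{1}{a}\, f_a(a)\, \mathrm{d}a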

  6. Fidelity and breeding probability related to population density and individual quality in black brent geese Branta bernicla nigricans

    USGS Publications Warehouse

    Sedinger, J.S.; Chelgren, N.D.; Ward, D.H.; Lindberg, M.S.

    2008-01-01

    1. Patterns of temporary emigration (associated with non-breeding) are important components of variation in individual quality. Permanent emigration from the natal area has important implications for both individual fitness and local population dynamics. 2. We estimated both permanent and temporary emigration of black brent geese (Branta bernicla nigricans Lawrence) from the Tutakoke River colony, using observations of marked brent geese on breeding and wintering areas, and recoveries of ringed individuals by hunters. We used the likelihood developed by Lindberg, Kendall, Hines & Anderson 2001 (Combining band recovery data and Pollock's robust design to model temporary and permanent emigration. Biometrics, 57, 273-281) to assess hypotheses and estimate parameters. 3. Temporary emigration (the converse of breeding) varied among age classes up to age 5, and differed between individuals that bred in the previous year vs. those that did not. Consistent with the hypothesis of variation in individual quality, individuals with a higher probability of breeding in one year also had a higher probability of breeding the next year. 4. Natal fidelity of females ranged from 0.70 ± 0.07 to 0.96 ± 0.18 and averaged 0.83. In contrast to Lindberg et al. (1998), we did not detect a relationship between fidelity and local population density. Natal fidelity was negatively correlated with first-year survival, suggesting that competition among individuals of the same age for breeding territories influenced dispersal. Once females nested at the Tutakoke River colony, breeding fidelity was 1.0. 5. Our analyses show substantial variation in individual quality associated with fitness, which other analyses suggest is strongly influenced by early environment. Our analyses also suggest substantial interchange among breeding colonies of brent geese, as first shown by Lindberg et al. (1998).

  7. SUPERNOVA DRIVING. II. COMPRESSIVE RATIO IN MOLECULAR-CLOUD TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Liubin; Padoan, Paolo; Haugbølle, Troels

    2016-07-01

    The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been previously studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence with a broad lognormal distribution of the compressive ratio, with a mean value ≈0.3, lower than the equilibrium value of ≈0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor are the mean cloud radial (expansion or contraction) and solid-body rotation velocities. Furthermore, the clouds follow a general relation between the rms density and the rms Mach number similar to that of supersonic isothermal turbulence, though with a large scatter, and their average gas density probability density function is described well by a lognormal distribution, with the addition of a high-density power-law tail when self-gravity is included.

  8. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    PubMed

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of data to probability distributions tested with the χ² test confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave cv higher than 23%. This study indicates that more attention should be paid to estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
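
    A minimal sketch of the goodness-of-fit step, on synthetic counts rather than the study's data: fit a negative binomial by the method of moments and test it with a χ² statistic (scipy's nbinom accepts a real-valued dispersion parameter):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      counts = rng.negative_binomial(2, 0.02, size=200)  # synthetic cysts/100 g

      # Method-of-moments fit: variance > mean signals aggregation.
      m, v = counts.mean(), counts.var(ddof=1)
      k = m**2 / (v - m)   # dispersion parameter
      p = k / (k + m)

      # Chi-square comparison of binned observed vs. expected frequencies.
      edges = np.linspace(counts.min() - 0.5, counts.max() + 0.5, 9)
      obs, _ = np.histogram(counts, bins=edges)
      exp = np.diff(stats.nbinom.cdf(edges, k, p)) * counts.size
      chi2 = ((obs - exp) ** 2 / exp).sum()
      dof = len(obs) - 1 - 2  # two fitted parameters
      print("chi2 = %.1f, p = %.3f" % (chi2, stats.chi2.sf(chi2, dof)))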

  9. 3D joint inversion modeling of the lithospheric density structure based on gravity, geoid and topography data — Application to the Alborz Mountains (Iran) and South Caspian Basin region

    NASA Astrophysics Data System (ADS)

    Motavalli-Anbaran, Seyed-Hani; Zeyen, Hermann; Ebrahimzadeh Ardestani, Vahid

    2013-02-01

    We present a 3D algorithm to obtain the density structure of the lithosphere from joint inversion of free air gravity, geoid and topography data based on a Bayesian approach with Gaussian probability density functions. The algorithm delivers the crustal and lithospheric thicknesses and the average crustal density. Stabilization of the inversion process may be obtained through parameter damping and smoothing as well as use of a priori information like crustal thicknesses from seismic profiles. The algorithm is applied to synthetic models in order to demonstrate its usefulness. A real data application is presented for the area of northern Iran (with the Alborz Mountains as main target) and the South Caspian Basin. The resulting model shows an important crustal root (up to 55 km) under the Alborz Mountains and a thin crust (ca. 30 km) under the southernmost South Caspian Basin thickening northward to the Apsheron-Balkan Sill to 45 km. Central and NW Iran is underlain by a thin lithosphere (ca. 90-100 km). The lithosphere thickens under the South Caspian Basin until the Apsheron-Balkan Sill where it reaches more than 240 km. Under the stable Turan platform, we find a lithospheric thickness of 160-180 km.

  10. Detection of the nipple in automated 3D breast ultrasound using coronal slab-average-projection and cumulative probability map

    NASA Astrophysics Data System (ADS)

    Kim, Hannah; Hong, Helen

    2014-03-01

    We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using coronal slab-average-projection and a cumulative probability map. First, to identify coronal images that show a clear distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and the negatively skewed images are selected. Then, a coronal slab-average-projection image is reformatted from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering the nipple-areola region is detected using the Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu's thresholding is applied to the elliptical ROI and a cumulative probability map in the elliptical ROI is generated by assigning high probability to the low intensity region. Falsely detected small components are eliminated using morphological opening and the center point of the detected nipple region is calculated. Experimental results show that our method provides a 94.4% nipple detection rate.

  11. Control of Carbon Nanotube Density and Tower Height in an Array

    NASA Technical Reports Server (NTRS)

    Delzeit, Lance D. (Inventor); Schipper, John F. (Inventor)

    2010-01-01

    A method for controlling density or tower height of carbon nanotube (CNT) arrays grown in spaced apart first and second regions on a substrate. CNTs having a first density range (or first tower height range) are grown in the first region using a first source temperature range for growth. Subsequently or simultaneously, CNTs having a second density range (or second tower height range), having an average density (or average tower height) in the second region different from the average density (or average tower height) for the first region, are grown in the second region, using supplemental localized heating for the second region. Applications for thermal dissipation and/or dissipation of electrical charge or voltage in an electronic device are discussed.

  12. Semiclassical electron transport at the edge of a two-dimensional topological insulator: Interplay of protected and unprotected modes

    NASA Astrophysics Data System (ADS)

    Khalaf, E.; Skvortsov, M. A.; Ostrovsky, P. M.

    2016-03-01

    We study electron transport at the edge of a generic disordered two-dimensional topological insulator, where some channels are topologically protected from backscattering. Assuming the total number of channels is large, we consider the edge as a quasi-one-dimensional quantum wire and describe it in terms of a nonlinear sigma model with a topological term. Neglecting localization effects, we calculate the average distribution function of transmission probabilities as a function of the sample length. We mainly focus on the two experimentally relevant cases: a junction between two quantum Hall (QH) states with different filling factors (unitary class) and a relatively thick quantum well exhibiting quantum spin Hall (QSH) effect (symplectic class). In a QH sample, the presence of topologically protected modes leads to a strong suppression of diffusion in the other channels already at scales much shorter than the localization length. On the semiclassical level, this is accompanied by the formation of a gap in the spectrum of transmission probabilities close to unit transmission, thereby suppressing shot noise and conductance fluctuations. In the case of a QSH system, there is at most one topologically protected edge channel leading to weaker transport effects. In order to describe 'topological' suppression of nearly perfect transparencies, we develop an exact mapping of the semiclassical limit of the one-dimensional sigma model onto a zero-dimensional sigma model of a different symmetry class, allowing us to identify the distribution of transmission probabilities with the average spectral density of a certain random-matrix ensemble. We extend our results to other symmetry classes with topologically protected edges in two dimensions.

  13. Quantum walks: The first detected passage time problem

    NASA Astrophysics Data System (ADS)

    Friedman, H.; Kessler, D. A.; Barkai, E.

    2017-03-01

    Even after decades of research, the problem of first passage time statistics for quantum dynamics remains a challenging topic of fundamental and practical importance. Using a projective measurement approach, with a sampling time τ, we obtain the statistics of first detection events for quantum dynamics on a lattice, with the detector located at the origin. A quantum renewal equation for a first detection wave function, in terms of which the first detection probability can be calculated, is derived. This formula gives the relation between first detection statistics and the solution of the corresponding Schrödinger equation in the absence of measurement. We illustrate our results with tight-binding quantum walk models. We examine a closed system, i.e., a ring, and reveal the intricate influence of the sampling time τ on the statistics of detection, discussing the quantum Zeno effect, half dark states, revivals, and optimal detection. The initial condition modifies the statistics of a quantum walk on a finite ring in surprising ways. In some cases, the average detection time is independent of the sampling time while in others the average exhibits multiple divergences as the sampling time is modified. For an unbounded one-dimensional quantum walk, the probability of first detection decays like (time)^(-3) with superimposed oscillations, with exceptional behavior when the sampling period τ times the tunneling rate γ is a multiple of π/2. The amplitude of the power-law decay is suppressed as τ → 0 due to the Zeno effect. Our work, an extended version of our previously published paper, predicts rich physical behaviors compared with classical Brownian motion, for which the first passage probability density decays monotonically like (time)^(-3/2), as elucidated by Schrödinger in 1915.

  14. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  15. Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?

    USGS Publications Warehouse

    Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.

    2005-01-01

    In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
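
    In the spirit of the comparison above, the toy simulation below contrasts quasi-extinction frequencies for a stochastic Ricker population with and without a crude density-dependent epidemic term; the functional forms and all parameters are invented for illustration and are not the authors' host-parasite model:

      import numpy as np

      rng = np.random.default_rng(9)

      def quasi_extinction_prob(epidemics, reps=2000, years=50, threshold=10.0):
          hits = 0
          for _ in range(reps):
              n = 100.0
              for _ in range(years):
                  # Stochastic Ricker step (r = 1, carrying capacity 100).
                  n *= np.exp(1.0 * (1.0 - n / 100.0) + rng.normal(0.0, 0.3))
                  # Epidemic arrival probability grows with host density.
                  if epidemics and rng.random() < min(1.0, n / 200.0):
                      n *= rng.uniform(0.3, 0.9)  # epidemic culls the host
                  if n < threshold:
                      hits += 1
                      break
          return hits / reps

      print("quasi-extinction, Ricker only:   ", quasi_extinction_prob(False))
      print("quasi-extinction, with epidemics:", quasi_extinction_prob(True))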

  16. Short-term response of Dicamptodon tenebrosus larvae to timber management in southwestern Oregon

    USGS Publications Warehouse

    Leuthold, Niels; Adams, Michael J.; Hayes, John P.

    2012-01-01

    In the Pacific Northwest, previous studies have found a negative effect of timber management on the abundance of stream amphibians, but results have been variable and region specific. These studies have generally used survey methods that did not account for differences in capture probability and focused on stands that were harvested under older management practices. We examined the influences of contemporary forest practices on larval Dicamptodon tenebrosus as part of the Hinkle Creek paired watershed study. We used a mark-recapture analysis to estimate D. tenebrosus density at 100 1-m sites spread throughout the basin and used extended linear models that accounted for correlation resulting from the repeated surveys at sites across years. Density was associated with substrate, but we found no evidence of an effect of harvest. While holding other factors constant, the model-averaged estimates indicated: 1) each 10% increase in small cobble or larger substrate increased the median density of D. tenebrosus 1.05 times, 2) each 100-ha increase in the upstream area drained decreased the median density of D. tenebrosus 0.96 times, and 3) increasing the fish density in the 40 m around a site by 0.01 increased median salamander density 1.01 times. Although this study took place in a single basin, it suggests that timber management in similar third-order basins of the southwestern Oregon Cascade foothills is unlikely to have short-term effects on D. tenebrosus larvae.

  17. Avian associations of the Northern Great Plains grasslands

    USGS Publications Warehouse

    Kantrud, H.A.; Kologiski, R.L.

    1983-01-01

    The grassland region of the northern Great Plains was divided into six broad subregions by application of an avian indicator species analysis to data obtained from 582 sample plots censused during the breeding season. Common, ubiquitous species and rare species had little classificatory value and were eliminated from the data set used to derive the avian associations. Initial statistical division of the plots likely reflected structure of the dominant plant species used for nesting; later divisions probably were related to foraging or nesting cover requirements based on vegetation height or density, habitat heterogeneity, or possibly to the existence of mutually similar distributions or shared areas of greater than average abundance for certain groups of species. Knowledge of the effects of grazing, mostly by cattle, on habitat use by the breeding bird species was used to interpret the results of the indicator species analysis. Moderate grazing resulted in greater species richness in nearly all subregions; effects of grazing on total bird density were more variable.

  18. Particle-sampling statistics in laser anemometers: Sample-and-hold systems and saturable systems

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1983-01-01

    The effect of the data-processing system on the particle statistics obtained with laser anemometry of flows containing suspended particles is examined. Attention is given to the sample-and-hold processor, a pseudo-analog device which retains the last measurement until a new measurement is made, followed by time-averaging of the data. The second system considered features a dead time, i.e., a saturable system with a significant reset time and storage in a data buffer. It is noted that the saturable system operates independently of the particle arrival rate. The probabilities of a particle arrival in a given time period are calculated for both processing systems. It is shown that the system outputs are dependent on the mean particle flow rate, the flow correlation time, and the flow statistics, indicating that the particle density affects both systems. The results are significant for instances of good correlation between the particle density and velocity, such as occurs near the edge of a jet.

  19. Fast visible imaging of turbulent plasma in TORPEX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iraji, D.; Diallo, A.; Fasoli, A.

    2008-10-15

    Fast framing cameras constitute an important recent diagnostic development aimed at monitoring light emission from magnetically confined plasmas, and are now commonly used to study turbulence in plasmas. In the TORPEX toroidal device [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], low frequency electrostatic fluctuations associated with drift-interchange waves are routinely measured by means of extensive sets of Langmuir probes. A Photron Ultima APX-RS fast framing camera has recently been acquired to complement Langmuir probe measurements, which allows comparing statistical and spectral properties of visible light and electrostatic fluctuations. A direct imaging system has been developed, which allows viewing the light emitted from microwave-produced plasmas tangentially and perpendicularly to the toroidal direction. The comparison of the probability density function, power spectral density, and autoconditional average of the camera data to those obtained using a multiple head electrostatic probe covering the plasma cross section shows reasonable agreement in the case of perpendicular view and in the plasma region where interchange modes dominate.

  20. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.

  1. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames, and the two show good agreement.
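
    In the simplest homogeneous-path limit, the link between the transmittance PDF and the local extinction-coefficient PDF is a change of variables through the Beer-Lambert law, T = exp(-kL). The sketch below demonstrates only that single-path inversion with synthetic data; the DPF tomography of the record above additionally handles inhomogeneous paths.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 0.05  # path length [m]; hypothetical

    # Synthetic stand-in for measured path-integrated transmittances.
    k_true = rng.gamma(shape=4.0, scale=5.0, size=100_000)  # extinction [1/m]
    T = np.exp(-k_true * L)                                 # Beer-Lambert law

    # Invert sample-by-sample, k = -ln(T)/L, and histogram to estimate the
    # PDF of the local extinction coefficient.
    k_rec = -np.log(T) / L
    pdf, edges = np.histogram(k_rec, bins=50, density=True)
    print("recovered PDF peaks near k =", edges[np.argmax(pdf)], "1/m")
    ```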

  2. Pricing of common cosmetic surgery procedures: local economic factors trump supply and demand.

    PubMed

    Richardson, Clare; Mattison, Gennaya; Workman, Adrienne; Gupta, Subhas

    2015-02-01

    The pricing of cosmetic surgery procedures has long been thought to coincide with laws of basic economics, including the model of supply and demand. However, the highly variable prices of these procedures indicate that additional economic contributors are probable. The authors sought to reassess the fit of cosmetic surgery costs to the model of supply and demand and to determine the driving forces behind the pricing of cosmetic surgery procedures. Ten plastic surgery practices were randomly selected from each of 15 US cities of various population sizes. Average prices of breast augmentation, mastopexy, abdominoplasty, blepharoplasty, and rhytidectomy in each city were compared with economic and demographic statistics. The average price of cosmetic surgery procedures correlated substantially with population size (r = 0.767), cost-of-living index (r = 0.784), cost to own real estate (r = 0.714), and cost to rent real estate (r = 0.695) across the 15 US cities. Cosmetic surgery pricing also was found to correlate (albeit weakly) with household income (r = 0.436) and per capita income (r = 0.576). Virtually no correlations existed between pricing and the density of plastic surgeons (r = 0.185) or the average age of residents (r = 0.076). Results of this study demonstrate a correlation between costs of cosmetic surgery procedures and local economic factors. Cosmetic surgery pricing cannot be completely explained by the supply-and-demand model because no association was found between procedure cost and the density of plastic surgeons. © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  3. Radiation induced segregation and precipitation behavior in self-ion irradiated Ferritic/Martensitic HT9 steel

    DOE PAGES

    Zheng, Ce; Auger, Maria A.; Moody, Michael P.; ...

    2017-04-24

    In this study, Ferritic/Martensitic (F/M) HT9 steel was irradiated to 20 displacements per atom (dpa) at 600 nm depth at 420 and 440 °C, and to 1, 10 and 20 dpa at 600 nm depth at 470 °C using 5 MeV Fe++ ions. The characterization was conducted using ChemiSTEM and Atom Probe Tomography (APT), with a focus on radiation induced segregation and precipitation. Ni and/or Si segregation at defect sinks (grain boundaries, dislocation lines, carbide/matrix interfaces) together with Ni, Si, Mn rich G-phase precipitation was observed in self-ion irradiated HT9 except in the very low dose case (1 dpa at 470 °C). Some G-phase precipitates were found to nucleate heterogeneously at defect sinks where Ni and/or Si segregated. In contrast to what was previously reported in the literature for neutron irradiated HT9, no Cr-rich α' phase, χ phase, η phase, or voids were found in self-ion irradiated HT9. The difference in observed microstructures is probably due to the difference in irradiation dose rate between ion irradiation and neutron irradiation. In addition, the average size and number density of G-phase precipitates were found to be sensitive to both irradiation temperature and dose. At the same irradiation dose, the average size of G-phase precipitates increased whereas the number density decreased with increasing irradiation temperature. At the same irradiation temperature, the average size increased with increasing irradiation dose.

  4. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
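
    A minimal CTRW sketch in the spirit of this record, with assumed functional forms: exponential pausing times and Gaussian jump magnitudes (the paper instead works with the empirical densities extracted from the futures data).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def ctrw(n_jumps, mean_wait=1.0, jump_sigma=0.01):
        """CTRW sample path: cumulative jump times and cumulative price changes."""
        waits = rng.exponential(mean_wait, n_jumps)   # pausing-time density psi(t)
        jumps = rng.normal(0.0, jump_sigma, n_jumps)  # jump-magnitude density h(x)
        return np.cumsum(waits), np.cumsum(jumps)

    # Distribution of price changes over a fixed horizon tau, from many paths.
    tau, changes = 5.0, []
    for _ in range(2000):
        t, x = ctrw(100)
        idx = np.searchsorted(t, tau)
        changes.append(x[idx - 1] if idx > 0 else 0.0)
    print("std of price change at tau:", np.std(changes))
    ```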

  5. The Independent Effects of Phonotactic Probability and Neighbourhood Density on Lexical Acquisition by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Lee, Su-Yeon

    2011-01-01

    The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…

  6. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  7. Using ring width correlations to study the effects of plantation density on wood density and anatomical properties of red pine (Pinus resinosa Ait.)

    Treesearch

    J. Y. Zhu; C. T. Scott; K. L. Scallon; G. C. Myers

    2006-01-01

    This study demonstrated that average ring width (or average annual radial growth rate) is a reliable parameter to quantify the effects of tree plantation density (growth suppression) on wood density and tracheid anatomical properties. The average ring width correlated well with wood density and tracheid anatomical properties of red pines (Pinus resinosa Ait.) from...

  8. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
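
    The record does not spell out the algorithm, so the sketch below is a generic Wang-Landau-style flat-histogram scheme (importance sampling with a multiplicative update factor that is progressively reduced), applied to a toy 1D Ising ring rather than a real potential energy landscape.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 12                                     # spins on a 1D Ising ring (toy)

    def energy(s):
        return -int(np.sum(s * np.roll(s, 1)))

    log_g = {}                                 # running estimate of ln g(E)
    f = 1.0                                    # ln of the multiplicative factor

    s = rng.choice([-1, 1], N)
    E = energy(s)
    while f > 1e-4:
        for _ in range(20_000):                # fixed sweeps per stage; real WL
            i = rng.integers(N)                # checks histogram flatness instead
            s[i] *= -1
            E_new = energy(s)
            # Accept with min(1, g(E)/g(E_new)) -> drives a flat histogram in E.
            if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
                E = E_new
            else:
                s[i] *= -1                     # reject: undo the flip
            log_g[E] = log_g.get(E, 0.0) + f
        f /= 2.0                               # reduce the modification factor

    base = min(log_g.values())
    print({e: round(v - base, 2) for e, v in sorted(log_g.items())})
    ```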

  9. Improvements in sub-grid, microphysics averages using quadrature based approaches

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Debusschere, B.; Larson, V. E.

    2013-12-01

    Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as random variables in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, of the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
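
    As a sketch of the quadrature idea (our illustration; the PDF shape, parameter values, and quadrature rule are assumptions, not those of the cited work), compare an 8-point Gauss-Hermite average of a Kessler-type autoconversion rate with plain Monte Carlo under a normal sub-grid PDF:

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite

    def kessler(qc, k=1e-3, qcrit=5e-4):
        """Kessler-type autoconversion rate; active only above a threshold."""
        return k * np.maximum(qc - qcrit, 0.0)

    mu, sigma = 6e-4, 2e-4          # assumed sub-grid mean/std of cloud water

    # E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature.
    x, w = hermegauss(8)
    quad_avg = np.sum(w * kessler(mu + sigma * x)) / np.sqrt(2.0 * np.pi)

    rng = np.random.default_rng(3)
    mc_avg = kessler(rng.normal(mu, sigma, 1000)).mean()
    print(f"quadrature (8 pts): {quad_avg:.3e}  Monte Carlo (1000 pts): {mc_avg:.3e}")
    ```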

  10. Irreversible reactions and diffusive escape: Stationary properties

    DOE PAGES

    Krapivsky, Paul L.; Ben-Naim, Eli

    2015-05-01

    We study three basic diffusion-controlled reaction processes—annihilation, coalescence, and aggregation. We examine the evolution starting with the most natural inhomogeneous initial configuration, where a half-line is uniformly filled by particles while the complementary half-line is empty. We show that the total number of particles that infiltrate the initially empty half-line is finite and has a stationary distribution. We determine the evolution of the average density, from which we derive the average total number N of particles in the initially empty half-line; e.g. for annihilation $\langle N\rangle = \frac{3}{16} + \frac{1}{4\pi}$. For the coalescence process, we devise a procedure that in principle allows one to compute P(N), the probability to find exactly N particles in the initially empty half-line; we complete the calculations in the first non-trivial case (N = 1). As a by-product we derive the distance distribution between the two leading particles.

  11. Characterization of impulse noise and analysis of its effect upon correlation receivers

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Moore, J. D.

    1971-01-01

    A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that orthogonal signaling is not equivalent to on-off signaling with the same average energy.

  12. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
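
    One way such per-sample detection probabilities feed into survey design: if replicate samples are treated as independent (an assumption we add here, not a claim of the study), the probability of at least one detection compounds as 1 - (1 - p)^n.

    ```python
    def detect_prob(p_single: float, n_samples: int) -> float:
        """P(at least one detection) across independent replicate samples."""
        return 1.0 - (1.0 - p_single) ** n_samples

    # 0.18 echoes the abstract's low-density per-sample figure; the independence
    # assumption and the sample counts are ours.
    for n in (1, 3, 5, 10):
        print(f"{n:2d} samples -> detection probability {detect_prob(0.18, n):.3f}")
    ```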

  13. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as the existence and uniqueness of solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform densities, either separated from zero or not. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
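
    The linear chain trick mentioned above replaces a distributed delay with an Erlang kernel of shape k and rate a by a chain of k linear compartments, so standard ODE solvers apply. Below is a generic sketch on a toy scalar equation, not the tumour-immune model of the record.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k, a = 3, 1.0                       # Erlang shape and rate (hypothetical)

    def rhs(t, u):
        x, y = u[0], u[1:]              # x: state; y: delay-chain variables
        dx = x * (1.0 - y[-1])          # toy dynamics driven by the delayed signal
        dy = np.empty(k)
        dy[0] = a * (x - y[0])          # first stage tracks x
        dy[1:] = a * (y[:-1] - y[1:])   # each stage relaxes toward the previous
        return np.concatenate(([dx], dy))

    u0 = np.concatenate(([0.5], np.zeros(k)))
    sol = solve_ivp(rhs, (0.0, 40.0), u0, max_step=0.1)
    print("x(40) =", round(sol.y[0, -1], 4))   # y[-1](t) approximates the
                                               # Erlang-delayed history of x
    ```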

  14. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  15. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  16. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August; whereas, the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.

  17. Communicating the risk of injury in schoolboy rugby: using Poisson probability as an alternative presentation of the epidemiology.

    PubMed

    Parekh, Nikesh; Hodges, Stewart D; Pollock, Allyson M; Kirkwood, Graham

    2012-06-01

    The communication of injury risk in rugby and other sports is underdeveloped, and parents, children and coaches need to be better informed about risk. A Poisson distribution was used to transform population-based incidence of injury into average probabilities of injury to individual players. The incidence of injury in schoolboy rugby matches ranges from 7 to 129.8 injuries per 1000 player-hours; these rates translate to average probabilities of injury to a player of between 12% and 90% over a season. Incidence of injury and average probabilities of injury over a season should be published together in all future epidemiological studies on school rugby and other sports. More research is required on informing and communicating injury risks to parents, staff and children and how it affects monitoring, decision making and prevention strategies.
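
    The underlying transformation is elementary: with incidence lambda per 1000 player-hours and a season exposure of h player-hours, the Poisson model gives P(at least one injury) = 1 - exp(-lambda*h/1000). The sketch below reproduces the flavour of the quoted range under an assumed exposure of 20 match-hours per season (our assumption, not a figure from the paper).

    ```python
    import math

    def season_injury_prob(rate_per_1000h: float, exposure_hours: float) -> float:
        """P(at least one injury) under a constant-rate Poisson model."""
        return 1.0 - math.exp(-rate_per_1000h * exposure_hours / 1000.0)

    for rate in (7.0, 129.8):            # incidence endpoints from the abstract
        p = season_injury_prob(rate, exposure_hours=20.0)
        print(f"{rate:6.1f} injuries/1000 h -> season probability {p:.0%}")
    ```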

  18. Similarity-based distortion of visual short-term memory is due to perceptual averaging.

    PubMed

    Dubé, Chad; Zhou, Feng; Kahana, Michael J; Sekuler, Robert

    2014-03-01

    A task-irrelevant stimulus can distort recall from visual short-term memory (VSTM). Specifically, reproduction of a task-relevant memory item is biased in the direction of the irrelevant memory item (Huang & Sekuler, 2010a). The present study addresses the hypothesis that such effects reflect the influence of neural averaging under conditions of uncertainty about the contents of VSTM (Alvarez, 2011; Ball & Sekuler, 1980). We manipulated subjects' attention to relevant and irrelevant study items whose similarity relationships were held constant, while varying how similar the study items were to a subsequent recognition probe. On each trial, subjects were shown one or two Gabor patches, followed by the probe; their task was to indicate whether the probe matched one of the study items. A brief cue told subjects which Gabor, first or second, would serve as that trial's target item. Critically, this cue appeared either before, between, or after the study items. A distributional analysis of the resulting mnemometric functions showed an inflation in probability density in the region spanning the spatial frequency of the average of the two memory items. This effect, due to an elevation in false alarms to probes matching the perceptual average, was diminished when cues were presented before both study items. These results suggest that (a) perceptual averages are computed obligatorily and (b) perceptual averages are relied upon to a greater extent when item representations are weakened. Implications of these results for theories of VSTM are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Systematic influences of gamma-ray spectrometry data near the decision threshold for radioactivity measurements in the environment.

    PubMed

    Zorko, Benjamin; Korun, Matjaž; Mora Canadas, Juan Carlos; Nicoulaud-Gouin, Valerie; Chyly, Pavol; Blixt Buhr, Anna Maria; Lager, Charlotte; Aquilonius, Karin; Krajewski, Pawel

    2016-07-01

    Several methods for reporting outcomes of gamma-ray spectrometric measurements of environmental samples for dose calculations are presented and discussed. The measurement outcomes can be reported as primary measurement results, primary measurement results modified according to the quantification limit, best estimates obtained by the Bayesian posterior (ISO 11929), best estimates obtained by the probability density distribution resembling shifting, and the procedure recommended by the European Commission (EC). The annual dose is calculated from the arithmetic average using any of these five procedures. It was shown that the primary measurement results modified according to the quantification limit could lead to an underestimation of the annual dose. On the other hand, the best estimates lead to an overestimation of the annual dose. The annual doses calculated from the measurement outcomes obtained according to the EC's recommended procedure, which does not cope with the uncertainties, fluctuate between an under- and overestimation, depending on the frequency of the measurement results that are larger than the limit of detection. In the extreme case, when no measurement results above the detection limit occur, the average over primary measurement results modified according to the quantification limit underestimates the average over primary measurement results by about 80%. The average over best estimates calculated according to the procedure resembling shifting overestimates the average over primary measurement results by 35%, the average obtained by the Bayesian posterior by 85%, and the treatment according to the EC recommendation by 89%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Early Fluid and Protein Shifts in Men During Water Immersion

    NASA Technical Reports Server (NTRS)

    Hinghofer-Szalkay, H.; Harrison, M. H.; Greenleaf, J. E.

    1987-01-01

    High precision blood and plasma densitometry was used to measure transvascular fluid shifts during water immersion to the neck. Six men (28-49 years) undertook 30 min of standing immersion in water at 35.0 +/- 0.2 C; immersion was preceded by 30 min of control standing in air at 28 +/- 1 C. Blood was sampled from an antecubital catheter for determination of Blood Density (BD), Plasma Density (PD), Haematocrit (Ht), total Plasma Protein Concentration (PPC), and Plasma Albumin Concentration (PAC). Compared to control, significant decreases (p less than 0.01) in all these measures were observed after 20 min immersion. At 30 min, plasma volume had increased by 11.0 +/- 2.8%; the average density of the fluid shifted from the extravascular into the vascular compartment was 1006.3 g/l; albumin moved with the fluid, and its concentration was about one-third of the plasma protein concentration during early immersion. These calculations are based on the assumption that the F-cell ratio remained unchanged. No changes in erythrocyte water content during immersion were found. Thus, immersion-induced haemodilution is probably accompanied by protein (mainly albumin) augmentation which accompanies the intravascular fluid shift.

  1. Hunting high and low: disentangling primordial and late-time non-Gaussianity with cosmic densities in spheres

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.

    2018-03-01

    Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.

  2. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  3. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  4. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141

  5. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
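
    The ensemble itself is easy to sample numerically. A minimal sketch (the dimensions are arbitrary choices): draw two independent Haar-random bipartite pure states, partial-trace each, and examine the spectrum of the difference.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 8, 64   # system and environment dimensions (arbitrary choices)

    def random_mixed_state(n, m):
        """Partial-trace a Haar-random bipartite pure state down to n dimensions."""
        psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
        psi /= np.linalg.norm(psi)
        return psi @ psi.conj().T       # rho = Tr_env |psi><psi|, an n x n matrix

    eig = np.linalg.eigvalsh(random_mixed_state(n, m) - random_mixed_state(n, m))
    print("spectrum of the difference:", np.round(eig, 4))
    print("trace distance:", round(0.5 * np.abs(eig).sum(), 4))
    ```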

  6. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
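
    A minimal sketch of the parametric-HSC workflow (synthetic data; the paper's candidate set and selection criteria may differ): fit several candidate probability densities to depth-use observations and rank them by AIC.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    depths = rng.gamma(3.0, 0.25, 400)     # synthetic stand-in for depth data [m]

    candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
                  "weibull": stats.weibull_min}
    for name, dist in candidates.items():
        params = dist.fit(depths, floc=0.0)        # location fixed at zero
        loglik = dist.logpdf(depths, *params).sum()
        aic = 2 * (len(params) - 1) - 2 * loglik   # fixed loc is not a free param
        print(f"{name:8s} AIC = {aic:8.1f}")
    ```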

  7. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  8. Favre-Averaged Turbulence Statistics in Variable Density Mixing of Buoyant Jets

    NASA Astrophysics Data System (ADS)

    Charonko, John; Prestridge, Kathy

    2014-11-01

    Variable density mixing of a heavy fluid jet with lower density ambient fluid in a subsonic wind tunnel was experimentally studied using Particle Image Velocimetry and Planar Laser Induced Fluorescence to simultaneously measure velocity and density. Flows involving the mixing of fluids with large density ratios are important in a range of physical problems including atmospheric and oceanic flows, industrial processes, and inertial confinement fusion. Here we focus on buoyant jets with coflow. Results from two different Atwood numbers, 0.1 (Boussinesq limit) and 0.6 (non-Boussinesq case), reveal that buoyancy is important for most of the turbulent quantities measured. Statistical characteristics of the mixing important for modeling these flows such as the PDFs of density and density gradients, turbulent kinetic energy, Favre averaged Reynolds stress, turbulent mass flux velocity, density-specific volume correlation, and density power spectra were also examined and compared with previous direct numerical simulations. Additionally, a method for directly estimating Reynolds-averaged velocity statistics on a per-pixel basis is extended to Favre-averages, yielding improved accuracy and spatial resolution as compared to traditional post-processing of velocity and density fields.
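
    The distinction between the two averages reduces to density weighting: the Favre mean is ũ = ⟨ρu⟩/⟨ρ⟩, which differs from the Reynolds mean ⟨u⟩ exactly when density and velocity are correlated. A toy sketch with synthetic, deliberately correlated samples:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic joint samples of density and velocity at a point in the jet;
    # the built-in rho-u correlation is what separates the two averages.
    rho = rng.lognormal(mean=0.0, sigma=0.4, size=100_000)
    u = 1.0 + 0.5 * (rho - rho.mean()) + rng.normal(0.0, 0.2, rho.size)

    u_reynolds = u.mean()                     # plain ensemble average <u>
    u_favre = (rho * u).mean() / rho.mean()   # density-weighted <rho u>/<rho>
    print(f"Reynolds mean: {u_reynolds:.3f}   Favre mean: {u_favre:.3f}")
    ```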

  9. Evaluation of a pretest scoring system (4Ts) for the diagnosis of heparin-induced thrombocytopenia in a university hospital setting.

    PubMed

    Vatanparast, Rodina; Lantz, Sarah; Ward, Kristine; Crilley, Pamela Ann; Styler, Michael

    2012-11-01

    The initial diagnosis of heparin-induced thrombocytopenia (HIT) is made on clinical grounds because the assays with the highest sensitivity (eg, heparin-platelet factor 4 antibody enzyme-linked immunosorbent assay [ELISA]) and specificity (eg, serotonin release assay) may not be readily available. The clinical utility of the pretest scoring system, the 4Ts, was developed and validated by Lo et al in the Journal of Thrombosis and Haemostasis in 2006. The pretest scoring system looks at the degree and timing of thrombocytopenia, thrombosis, and the possibility of other etiologies. Based on the 4T score, patients can be categorized as having a high, intermediate, or low probability of having HIT. We conducted a retrospective study of 100 consecutive patients who were tested for HIT during their hospitalization at Hahnemann University Hospital (Philadelphia, PA) in 2009. Of the 100 patients analyzed, 72, 23, and 5 patients had 4T pretest probability scores of low, intermediate, and high, respectively. A positive HIT ELISA (optical density > 1.0 unit) was detected in 0 of 72 patients (0%) in the low probability group, in 5 of 23 patients (22%) in the intermediate probability group, and in 2 of 5 patients (40%) in the high probability group. The average turnaround time for the HIT ELISA was 4 to 5 days. Fourteen (19%) of the 72 patients with a low pretest probability of HIT were treated with a direct thrombin inhibitor. Ten (71%) of the 14 patients in the low probability group treated with a direct thrombin inhibitor had a major complication of bleeding requiring blood transfusion support. In this retrospective study, a low 4T score showed 100% correlation with a negative HIT antibody assay. We recommend incorporating the 4T scoring system into institutional core measures when assessing a patient with suspected HIT, selecting only patients with intermediate to high probability for therapeutic intervention, which may translate into reduced morbidity and lower health care costs.

  10. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.

    2011-04-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that makes use of storm surge and circulation models currently operational in Europe, as well as near-real-time tide gauge data in the region, with the following main goals: providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool; and generating better sea level forecasts, including confidence intervals, by means of the Bayesian Model Averaging (BMA) technique. The system was developed and implemented within the ECOOP (C.No. 036355) European Project for the NOOS and IBIROOS regions, based on the MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado, respectively. The Bayesian Model Averaging technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies the technique requires sea level data from tide gauges in near-real time. Results of validation of the different models and of the BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved useful for detecting problems in some of the circulation models not previously well calibrated with sea level data, for identifying the differences between baroclinic and barotropic models for sea level applications, and for confirming the general improvement of the BMA forecasts.
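
    The BMA combination step itself is a weighted mixture of member PDFs. A minimal sketch with three hypothetical Gaussian member forecasts (means, spreads, and weights are all made up for illustration):

    ```python
    import numpy as np
    from scipy import stats

    means = np.array([0.42, 0.35, 0.50])    # member sea level forecasts [m]
    sigmas = np.array([0.08, 0.05, 0.10])   # member error spreads [m]
    weights = np.array([0.5, 0.3, 0.2])     # BMA weights from a training period

    grid = np.linspace(0.0, 1.0, 501)
    pdf = sum(w * stats.norm.pdf(grid, m, s)
              for w, m, s in zip(weights, means, sigmas))

    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    lo, hi = grid[np.searchsorted(cdf, 0.05)], grid[np.searchsorted(cdf, 0.95)]
    print(f"BMA mean {np.sum(weights * means):.2f} m, "
          f"90% interval [{lo:.2f}, {hi:.2f}] m")
    ```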

  11. Effect of H2 binding on the nonadiabatic transition probability between singlet and triplet states of the [NiFe]-hydrogenase active site.

    PubMed

    Kaliakin, Danil S; Zaari, Ryan R; Varganov, Sergey A

    2015-02-12

    We investigate the effect of H2 binding on the spin-forbidden nonadiabatic transition probability between the lowest energy singlet and triplet electronic states of a [NiFe]-hydrogenase active site model, using a velocity-averaged Landau-Zener theory. Density functional and multireference perturbation theories were used to provide parameters for the Landau-Zener calculations. It was found that variation of the torsion angle between the terminal thiolate ligands around the Ni center induces an intersystem crossing between the lowest energy singlet and triplet electronic states in the bare active site and in the active site with bound H2. Potential energy curves between the singlet and triplet minima along the torsion angle and H2 binding energies to the two spin states were calculated. Upon H2 binding to the active site, there is a decrease in the torsion angle at the minimum energy crossing point between the singlet and triplet states. The probability of nonadiabatic transitions at temperatures between 270 and 370 K ranges from 35% to 32% for the active site with bound H2 and from 42% to 38% for the bare active site, thus indicating the importance of spin-forbidden nonadiabatic pathways for H2 binding on the [NiFe]-hydrogenase active site.
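
    A sketch of the velocity-averaged Landau-Zener estimate (all coupling, slope, and mass values below are illustrative placeholders, not the paper's computed parameters): the single-passage hopping probability P = 1 - exp(-2π H12² / (ħ v |ΔF|)) is averaged over a 1D Maxwell-Boltzmann speed distribution.

    ```python
    import numpy as np

    hbar = 1.054571817e-34   # J s
    kB = 1.380649e-23        # J/K

    H12 = 2.0e-21   # coupling at the crossing [J] (placeholder)
    dF = 4.0e-9     # |slope difference| at the crossing [J/m] (placeholder)
    mu = 1.6e-26    # effective mass of the crossing mode [kg] (placeholder)

    def p_lz(v):
        """Single-passage Landau-Zener hopping probability at speed v."""
        return 1.0 - np.exp(-2.0 * np.pi * H12**2 / (hbar * v * dF))

    def p_velocity_averaged(T, n=200_000):
        """Average over a 1D Maxwell-Boltzmann speed distribution at temperature T."""
        rng = np.random.default_rng(0)
        v = np.abs(rng.normal(0.0, np.sqrt(kB * T / mu), n))
        return p_lz(v).mean()

    for T in (270, 300, 370):
        print(f"T = {T} K: <P_LZ> = {p_velocity_averaged(T):.3f}")
    ```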

  12. Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix

    NASA Astrophysics Data System (ADS)

    Charnotskii, Mikhail

    2017-08-01

    Traditionally, partially polarized light is characterized by the four Stokes parameters. An equivalent description is also provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating, two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the Period-Averaged Probability Density Function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF allows one to separate the coherent and fluctuating components of the field. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly introduced PA-PDF is determined by 13 parameters that include the first moments and the covariance matrix of the quadrature components of the oscillating vector field.

  13. An efficient distribution method for nonlinear transport problems in stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, F.; Tchelepi, H.; Meyer, D. W.

    2015-12-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, the method yields information that is rarely available in other approaches yet crucial, such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90). We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.

  14. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the superpotential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. The wave function and probability density are plotted using the Delphi 7.0 programming language. Finally, the expectation values of quantum mechanical operators can be calculated analytically in integral form or from the probability density graphs produced by the program.
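
    For the attractive hyperbolic (modified) Poschl-Teller well V(x) = -(ħ²a²/2m)·λ(λ+1)·sech²(ax), the textbook ground state is ψ0(x) ∝ sech^λ(ax). The sketch below (parameter values assumed) normalizes that state numerically and evaluates the probability density and an expectation value, mirroring what the record describes doing in Delphi:

    ```python
    import numpy as np

    a, lam = 1.0, 2.0                    # inverse width and depth parameter (assumed)
    x = np.linspace(-8.0, 8.0, 4001)
    dx = x[1] - x[0]

    psi = np.cosh(a * x) ** (-lam)       # unnormalized ground state sech^lam(a x)
    psi /= np.sqrt(np.sum(psi**2) * dx)  # normalize on the grid

    density = psi**2                     # probability density |psi_0(x)|^2
    print("normalization check:", round(np.sum(density) * dx, 6))
    print("<x^2> =", round(np.sum(x**2 * density) * dx, 4))
    ```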

  15. Surface slip during large Owens Valley earthquakes

    USGS Publications Warehouse

    Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
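
    The COPD construction is just a stack of per-measurement offset PDFs. A toy sketch (made-up offsets and, unlike the study's uniquely shaped PDFs, plain Gaussians):

    ```python
    import numpy as np

    offsets = np.array([3.1, 3.4, 2.9, 6.8, 7.3, 3.3, 7.0, 12.5, 3.6])  # m, made up
    sigmas = np.array([0.4, 0.6, 0.5, 0.9, 0.8, 0.3, 1.0, 1.2, 0.5])    # 1-sigma

    grid = np.linspace(0.0, 20.0, 2001)
    copd = sum(np.exp(-0.5 * ((grid - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
               for m, s in zip(offsets, sigmas))

    # Local maxima of the stacked curve suggest single- and multi-event offsets.
    interior = copd[1:-1]
    peaks = grid[1:-1][(interior > copd[:-2]) & (interior > copd[2:])]
    print("COPD peaks near:", np.round(peaks, 1), "m")
    ```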

  16. Multispecies reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, A.; Fatollahi, A. H.; Khorrami, M.; Shariati, A.

    2000-10-01

    Multispecies reaction-diffusion systems, for which the time evolution equations of correlation functions become a closed set, are considered. A formal solution for the average densities is found. Some special interactions and the exact time dependence of the average densities in these cases are also studied. For the general case, the large-time behavior of the average densities has also been obtained.

  17. Uncovering hidden variation in polyploid wheat.

    PubMed

    Krasileva, Ksenia V; Vasquez-Gross, Hans A; Howell, Tyson; Bailey, Paul; Paraiso, Francine; Clissold, Leah; Simmonds, James; Ramirez-Gonzalez, Ricardo H; Wang, Xiaodong; Borrill, Philippa; Fosker, Christine; Ayling, Sarah; Phillips, Andrew L; Uauy, Cristobal; Dubcovsky, Jorge

    2017-02-07

    Comprehensive reverse genetic resources, which have been key to understanding gene function in diploid model organisms, are missing in many polyploid crops. Young polyploid species such as wheat, which was domesticated less than 10,000 y ago, have high levels of sequence identity among subgenomes that mask the effects of recessive alleles. Such redundancy reduces the probability of selection of favorable mutations during natural or human selection, but also allows wheat to tolerate high densities of induced mutations. Here we exploited this property to sequence and catalog more than 10 million mutations in the protein-coding regions of 2,735 mutant lines of tetraploid and hexaploid wheat. We detected, on average, 2,705 and 5,351 mutations per tetraploid and hexaploid line, respectively, which resulted in 35-40 mutations per kb in each population. With these mutation densities, we identified an average of 23-24 missense and truncation alleles per gene, with at least one truncation or deleterious missense mutation in more than 90% of the captured wheat genes per population. This public collection of mutant seed stocks and sequence data enables rapid identification of mutations in the different copies of the wheat genes, which can be combined to uncover previously hidden variation. Polyploidy is a central phenomenon in plant evolution, and many crop species have undergone recent genome duplication events. Therefore, the general strategy and methods developed herein can benefit other polyploid crops.

  18. 77 FR 4014 - Takes of Marine Mammals Incidental to Specified Activities; Physical Oceanographic Studies in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... chambers for a total discharge volume of 210 in³) with a 1,200 m long hydrophone streamer. GI guns will... require the Navy to use species-specific mean maximum densities, rather than the mean average densities... use mean maximum densities, rather than mean average densities. Marine mammal population density...

  19. Diffusion of finite-sized hard-core interacting particles in a one-dimensional box: Tagged particle dynamics.

    PubMed

    Lizana, L; Ambjörnsson, T

    2009-11-01

    We solve a nonequilibrium statistical-mechanics problem exactly, namely, the single-file dynamics of N hard-core interacting particles (the particles cannot pass each other) of size Δ diffusing in a one-dimensional system of finite length L with reflecting boundaries at the ends. We obtain an exact expression for the conditional probability density function ρ_T(y_T,t|y_{T,0}) that a tagged particle T (T = 1,...,N) is at position y_T at time t given that it at time t = 0 was at position y_{T,0}. Using a Bethe ansatz we obtain the N-particle probability density function and, by integrating out the coordinates (and averaging over initial positions) of all particles but particle T, we arrive at an exact expression for ρ_T(y_T,t|y_{T,0}) in terms of Jacobi polynomials or hypergeometric functions. Going beyond previous studies, we consider the asymptotic limit of large N, maintaining L finite, using a nonstandard asymptotic technique. We derive an exact expression for ρ_T(y_T,t|y_{T,0}) for a tagged particle located roughly in the middle of the system, from which we find that there are three time regimes of interest for finite-sized systems: (A) for times much smaller than the collision time, t ≪ τ_coll, the tagged particle undergoes ordinary free diffusion; (B) for times much larger than the collision time, t ≫ τ_coll, but smaller than the equilibrium time, t ≪ τ_e, the tagged particle shows subdiffusive single-file behavior; and (C) for times larger than the equilibrium time, t ≫ τ_e, ρ_T(y_T,t|y_{T,0}) approaches a polynomial-type equilibrium probability density function. Notably, only regimes (A) and (B) are found in the previously considered infinite systems.
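
    The single-file constraint is easy to probe numerically. A minimal Monte Carlo sketch (not the paper's exact Bethe-ansatz solution): Brownian moves are proposed one particle at a time and rejected whenever they would violate the hard-core spacing Δ or the reflecting walls; ensemble averaging over many such runs approximates ρ_T.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, L, delta, D, dt, steps = 11, 10.0, 0.2, 1.0, 1e-4, 5000
    x = np.linspace(delta, L - delta, N)      # ordered initial positions
    tag = N // 2                              # tagged middle particle
    traj = np.empty(steps)
    for t in range(steps):
        for i in rng.permutation(N):
            xn = x[i] + np.sqrt(2 * D * dt) * rng.standard_normal()
            lo = x[i - 1] + delta if i > 0 else 0.5 * delta
            hi = x[i + 1] - delta if i < N - 1 else L - 0.5 * delta
            if lo <= xn <= hi:                # hard core + reflecting walls
                x[i] = xn
        traj[t] = x[tag]
    # one run gives a single trajectory; histogramming many independent runs
    # approximates the conditional PDF rho_T(y_T, t | y_T,0)
    print("tagged-particle displacement variance:", np.var(traj - traj[0]))
    ```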

  20. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
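
    In outline, the mixture can be reproduced by sampling each published estimate and pooling the draws. The (mean, sd) pairs below are hypothetical placeholders rather than the six values tabulated in the paper; each estimate is represented by a moment-matched lognormal.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical (mean, sd) pairs in millions of butterflies per hectare,
    # standing in for the six published estimates synthesized in the paper
    estimates = [(6.9, 2.0), (11.0, 3.0), (21.1, 5.0),
                 (28.0, 8.0), (45.0, 12.0), (60.9, 15.0)]
    draws = []
    for m, sd in estimates:
        s2 = np.log(1.0 + (sd / m)**2)            # moment-matched lognormal
        draws.append(rng.lognormal(np.log(m) - s2 / 2, np.sqrt(s2), 100_000))
    mix = np.concatenate(draws)                   # equal-weight mixture
    print(f"mean {mix.mean():.1f}, median {np.median(mix):.1f}, "
          f"95% CI [{np.percentile(mix, 2.5):.1f}, {np.percentile(mix, 97.5):.1f}]")
    ```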

  1. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  2. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that cannot be well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities provide the prior probability estimates of each class and the weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
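
    A minimal sketch of the E- and M-steps, assuming n-look sample covariance matrices and using a random initial partition in place of the paper's GΓD-based initialization:

    ```python
    import numpy as np

    def wishart_loglik(C, Sigma, n):
        """Complex-Wishart log-likelihood of an n-look sample covariance C
        under class center Sigma, up to a class-independent constant."""
        _, logdet = np.linalg.slogdet(Sigma)
        return -n * (logdet + np.trace(np.linalg.inv(Sigma) @ C).real)

    def em_wishart(Cs, K, n, iters=30, seed=0):
        """EM for a K-component Wishart mixture; Cs has shape (N, d, d)."""
        rng = np.random.default_rng(seed)
        resp = rng.dirichlet(np.ones(K), size=len(Cs))   # random initial partition
        for _ in range(iters):
            pi = resp.mean(axis=0)                       # M-step: priors
            Sig = [np.tensordot(resp[:, k], Cs, axes=1) / resp[:, k].sum()
                   for k in range(K)]                    # weighted class centers
            ll = np.array([[wishart_loglik(C, Sig[k], n) + np.log(pi[k])
                            for k in range(K)] for C in Cs])
            ll -= ll.max(axis=1, keepdims=True)          # E-step: posteriors
            resp = np.exp(ll)
            resp /= resp.sum(axis=1, keepdims=True)
        return resp.argmax(axis=1), Sig

    def sample_cov(S, n, rng):
        """n-look sample covariance of zero-mean circular complex Gaussians."""
        Lch = np.linalg.cholesky(S)
        z = (rng.standard_normal((n, S.shape[0]))
             + 1j * rng.standard_normal((n, S.shape[0]))) / np.sqrt(2)
        v = z @ Lch.T
        return (v.T @ v.conj()) / n

    # synthetic demo with two 2x2 covariance classes
    rng = np.random.default_rng(1)
    A = np.array([[2.0, 0.5], [0.5, 1.0]], dtype=complex)
    B = np.array([[1.0, -0.3], [-0.3, 3.0]], dtype=complex)
    Cs = np.array([sample_cov(A if i < 60 else B, 8, rng) for i in range(100)])
    labels, _ = em_wishart(Cs, K=2, n=8)
    print(labels)   # the two synthetic classes should separate
    ```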

  3. Regional statistics in confined two-dimensional decaying turbulence.

    PubMed

    Házi, Gábor; Tóth, Gábor

    2011-06-28

    Two-dimensional decaying turbulence in a square container has been simulated using the lattice Boltzmann method. The probability density function (PDF) of the vorticity and the particle distribution functions have been determined at various regions of the domain. It is shown that, after the initial stage of decay, the regional area averaged enstrophy fluctuates strongly around a mean value in time. The ratio of the regional mean and the overall enstrophies increases monotonously with increasing distance from the wall. This function shows a similar shape to the axial mean velocity profile of turbulent channel flows. The PDF of the vorticity peaks at zero and is nearly symmetric considering the statistics in the overall domain. Approaching the wall, the PDFs become skewed owing to the boundary layer.

  4. Simultaneous retrieval of atmospheric CO2 and light path modification from space-based spectroscopic observations of greenhouse gases: methodology and application to GOSAT measurements over TCCON sites.

    PubMed

    Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry

    2013-02-20

    This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).

  5. A Comparative Study of Automated Infrasound Detectors - PMCC and AFD with Analyst Review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Junghyun; Hayward, Chris; Zeiler, Cleat

    Automated detections calculated by the progressive multi-channel correlation (PMCC) method (Cansi, 1995) and the adaptive F detector (AFD) (Arrowsmith et al., 2009) are compared to the signals identified by five independent analysts. Each detector was applied to a four-hour time sequence recorded by the Korean infrasound array CHNAR. This array was used because it is composed of both small (<100 m) and large (~1000 m) aperture element spacing. The four-hour time sequence contained a number of easily identified signals under noise conditions with average RMS amplitudes varying from 1.2 to 4.5 mPa (1 to 5 Hz), estimated with a running five-minute window. The effectiveness of the detectors was estimated for the small aperture, large aperture, small aperture combined with the large aperture, and full array. The full and combined arrays performed the best for AFD under all noise conditions, while the large aperture array had the poorest performance for both detectors. PMCC produced results similar to AFD under the lower noise conditions, but did not produce as dramatic an increase in detections using the full and combined arrays. Both automated detectors and the analysts produced a decrease in detections under the higher noise conditions. Comparing the detection probabilities with Estimated Receiver Operating Characteristic (EROC) curves, we found that the smaller value of consistency for PMCC and the larger p-value for AFD had the highest detection probability. These parameters produced greater changes in detection probability than estimates of the false alarm rate. The detection probability was impacted the most by noise level, with low noise (average RMS amplitude of 1.7 mPa) having an average detection probability of ~40% and high noise (average RMS amplitude of 2.9 mPa) an average detection probability of ~23%.

  6. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
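
    The moment-matching step at the heart of the maximum entropy method reduces to minimizing the convex dual log Z(λ) + λ·μ over the Lagrange multipliers. A sketch on a bounded grid with illustrative target moments (not data from the paper):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(-5, 5, 2001)
    dx = x[1] - x[0]
    phi = np.vstack([x, x**2, x**3, x**4])   # moment functions
    mu = np.array([0.0, 1.0, 0.0, 3.2])      # illustrative target moments

    def dual(lam):
        """Convex dual log Z(lam) + lam.mu; its minimizer gives the maximum
        entropy density p(x) = exp(-lam . phi(x)) / Z on this grid."""
        a = -(lam @ phi)
        m = a.max()
        logZ = m + np.log(np.sum(np.exp(a - m)) * dx)
        return logZ + lam @ mu

    res = minimize(dual, np.zeros(4), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
    a = -(res.x @ phi)
    p = np.exp(a - a.max())
    p /= p.sum() * dx                        # normalized maxent density
    print("recovered moments:", phi @ p * dx)   # should approach mu
    ```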

  7. Car accidents induced by a bottleneck

    NASA Astrophysics Data System (ADS)

    Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid

    2017-12-01

    Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents (Pac) at the entrance of the merging part of two roads (i.e., a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, while it increases road safety at high densities. The phase diagram of the system is also constructed.
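
    For orientation, the sketch below simulates the plain single-lane NS ring and counts dangerous situations using the standard Boccara-type criterion (small gap, leader moving and then abruptly stopped); the junction geometry, non-cooperative drivers, and the speed limit Vb of the paper are not modeled here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    Lr, vmax, p_brake, density, T = 200, 5, 0.3, 0.25, 2000
    Ncars = int(density * Lr)
    pos = np.sort(rng.choice(Lr, Ncars, replace=False))
    vel = np.zeros(Ncars, dtype=int)
    dangerous = 0
    for t in range(T):
        gaps = (np.roll(pos, -1) - pos - 1) % Lr      # empty cells to the leader
        v_old = vel.copy()
        vel = np.minimum(vel + 1, vmax)               # NS step 1: accelerate
        vel = np.minimum(vel, gaps)                   # NS step 2: avoid collision
        vel = np.maximum(vel - (rng.random(Ncars) < p_brake), 0)  # step 3: brake
        # dangerous situation: small gap, leader was moving and suddenly stops
        danger = (gaps <= vmax) & (np.roll(v_old, -1) > 0) & (np.roll(vel, -1) == 0)
        dangerous += danger.sum()
        pos = (pos + vel) % Lr                        # NS step 4: move
    print("P_ac per car per time step:", dangerous / (T * Ncars))
    ```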

  8. Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2005-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163

  9. Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2006-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596

  10. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wampler, William R.; Myers, Samuel M.; Modine, Normand A.

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.

  11. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  12. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
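
    Computationally, "ecological distance" amounts to least-cost paths over a resistance surface. A sketch, with an invented resistance grid and a half-normal encounter model whose parameters (p0, sigma) are illustrative:

    ```python
    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.csgraph import dijkstra

    def cost_distance(resistance):
        """All-pairs least-cost distances on a 4-connected grid; each edge is
        weighted by the mean resistance of the two cells it joins."""
        nr, nc = resistance.shape
        W = lil_matrix((nr * nc, nr * nc))
        for r in range(nr):
            for c in range(nc):
                i = r * nc + c
                for dr, dc in ((0, 1), (1, 0)):
                    r2, c2 = r + dr, c + dc
                    if r2 < nr and c2 < nc:
                        j = r2 * nc + c2
                        w = 0.5 * (resistance[r, c] + resistance[r2, c2])
                        W[i, j] = w
                        W[j, i] = w
        return dijkstra(W.tocsr(), directed=False)

    res = np.ones((10, 10))
    res[:, 5] = 10.0                     # a high-resistance barrier
    D = cost_distance(res)
    p0, sigma = 0.3, 2.0                 # illustrative encounter parameters
    p_enc = p0 * np.exp(-D**2 / (2 * sigma**2))   # half-normal encounter model
    print(p_enc[0, 3], p_enc[0, 8])      # same row, barrier in between
    ```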

  13. Capacity and optimal collusion attack channels for Gaussian fingerprinting games

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Moulin, Pierre

    2007-02-01

    In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint. That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.

  14. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  15. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  16. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  17. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  18. Premixed-Gas Flame Propagation in Hele-Shaw Cells

    NASA Technical Reports Server (NTRS)

    Sharif, J.; Abid, M.; Ronney, P. D.

    1999-01-01

    It is well known that buoyancy and thermal expansion affect the propagation rates and shapes of premixed gas flames. The understanding of such effects is complicated by the large density ratio between the reactants and products, which induces a baroclinic production of vorticity due to misalignment of density and pressure gradients at the front, which in turn leads to a complicated multi-dimensional flame/flow interaction. The Hele-Shaw cell, i.e., the region between closely-spaced flat parallel plates, is probably the simplest system in which multi-dimensional convection is present; consequently, the behavior of fluids in this system has been studied extensively (Homsy, 1987). Probably the most important characteristic of Hele-Shaw flows is that when the Reynolds number based on gap width is sufficiently small, the Navier-Stokes equations averaged over the gap reduce to a linear relation, namely a Laplace equation for pressure (Darcy's law). In this work, flame propagation in Hele-Shaw cells is studied to obtain a better understanding of buoyancy and thermal expansion effects on premixed flames. This work is also relevant to the study of unburned hydrocarbon emissions produced by internal combustion engines since these emissions are largely a result of the partial burning or complete flame quenching in the narrow, annular gap called the "crevice volume" between the piston and cylinder walls (Heywood, 1988). A better understanding of how flames propagate in these volumes through experiments using Hele-Shaw cells could lead to identification of means to reduce these emissions.

  19. Predicting structures in the Zone of Avoidance

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Colless, Matthew; Kraan-Korteweg, Renée C.; Gottlöber, Stefan

    2017-11-01

    The Zone of Avoidance (ZOA), whose emptiness is an artefact of our Galaxy's dust, has been challenging observers as well as theorists for many years. Multiple attempts have been made on the observational side to map this region in order to better understand the local flows. On the theoretical side, however, this region is often simply statistically populated with structures, and no real attempt has been made to confront theoretical and observed matter distributions. This paper takes a step forward using constrained realizations (CRs) of the local Universe, shown to be perfect substitutes for local Universe-like simulations in smoothed high-density peak studies. Far from generating completely `random' structures in the ZOA, the reconstruction technique arranges matter according to the surrounding environment of this region. More precisely, the mean distributions of structures in a series of constrained and random realizations (RRs) differ: while densities annihilate each other when averaging over 200 RRs, structures persist when summing 200 CRs. The probability distribution function of ZOA grid cells being highly overdense is a Gaussian with a 15 per cent mean in the random case, while that of the constrained case exhibits large tails. This implies that the areas with the largest probabilities most likely host a structure. Comparisons between these predictions and observations, such as those of the Puppis 3 cluster, show remarkable agreement and allow us to assert the presence of the recently observed Vela supercluster at about 180 h⁻¹ Mpc, right behind the thickest dust layers of our Galaxy.

  20. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    NASA Astrophysics Data System (ADS)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fitted to 250-meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius, and distribution width for mid and late winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented, and should be useful for setting bounds for retrievals of PSC properties from remote measurements, and for constraining model representations of PSCs.
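
    The bimodal lognormal fitting step can be outlined in a few lines; the synthetic channel data below stand in for the 250-m averaged optical particle counter concentrations, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bimodal_lognormal(r, N1, m1, s1, N2, m2, s2):
        """dN/dln(r) for two lognormal modes: Ni number concentration,
        mi median radius, si geometric standard deviation."""
        def mode(N, m, s):
            return (N / (np.sqrt(2 * np.pi) * np.log(s))
                    * np.exp(-0.5 * (np.log(r / m) / np.log(s))**2))
        return mode(N1, m1, s1) + mode(N2, m2, s2)

    r = np.logspace(-2, 1, 30)                   # radius grid, micrometers
    true = bimodal_lognormal(r, 10.0, 0.08, 1.6, 0.1, 1.2, 1.5)
    noise = np.exp(0.1 * np.random.default_rng(3).standard_normal(r.size))
    popt, _ = curve_fit(bimodal_lognormal, r, true * noise,
                        p0=(5.0, 0.1, 1.5, 0.05, 1.0, 1.4), maxfev=20000)
    print("fitted (N1, m1, s1, N2, m2, s2):", np.round(popt, 3))
    ```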

  1. Measurements of scalar released from point sources in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.

    2017-04-01

    Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of long tails of probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982, J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.

  2. Reduced densities of the invasive wasp, Vespula vulgaris (Hymenoptera: Vespidae), did not alter the invertebrate community composition of Nothofagus forests in New Zealand.

    PubMed

    Duthie, Catherine; Lester, Philip J

    2013-04-01

    Invasive common wasps (Vespula vulgaris L.) are predators of invertebrates in Nothofagus forests of New Zealand. We reduced wasp densities by poisoning in three sites over three years. We predicted an increase in the number of invertebrates and a change in the community composition in sites where wasps were poisoned (wasps removed) relative to nearby sites where wasps were not poisoned (wasps maintained). Wasp densities were significantly reduced, by an average of 58.9%, by poisoning. Despite this reduction in wasp densities, native bush ants (Prolasius advenus Forel) were the only taxon significantly influenced by wasp removal. However, contrary to our predictions, there were more ants caught in pitfall traps where wasps were maintained. We believe that the higher abundance of these ants is probably because of the scarcity of honeydew in wasp-maintained sites and compensatory foraging by ants in these areas. Otherwise, our results indicated no significant effects of reduced wasp densities on the total number of invertebrates, or the number of invertebrate families, observed in pitfall or Malaise traps. An analysis of community composition (permutational multivariate analysis of variance) also indicated no significant difference between wasp-removed and wasp-maintained communities. The most parsimonious explanation for our results is that although we significantly reduced wasp numbers, we may not have reduced numbers sufficiently, or for a sufficiently long period, to see a change or recovery in the community.

  3. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  4. Fractional Brownian motion with a reflecting wall

    NASA Astrophysics Data System (ADS)

    Wada, Alexander H. O.; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ∼t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier, leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
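
    A compact way to reproduce this setup is to generate exact fractional Gaussian noise by Cholesky factorization of its covariance and reflect each step at the wall; the sketch assumes the simple reflection rule x → |x + ξ|, which may differ in detail from the paper's implementation.

    ```python
    import numpy as np

    def fgn_cov(n, H):
        """Covariance matrix of fractional Gaussian noise with Hurst index H."""
        k = np.arange(n + 1.0)
        g = 0.5 * (np.abs(k - 1)**(2 * H) - 2 * k**(2 * H) + (k + 1)**(2 * H))
        i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        return g[np.abs(i - j)]

    rng = np.random.default_rng(4)
    n, H, runs = 512, 0.75, 400                  # alpha = 2H = 1.5, superdiffusive
    Lc = np.linalg.cholesky(fgn_cov(n, H) + 1e-10 * np.eye(n))
    msd = np.zeros(n)
    for _ in range(runs):
        incr = Lc @ rng.standard_normal(n)       # exact fractional Gaussian noise
        pos = 0.0
        for i in range(n):
            pos = abs(pos + incr[i])             # reflecting wall at x = 0
            msd[i] += pos**2
    msd /= runs
    t = np.arange(1, n + 1)
    print("MSD exponent ~", np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)[0])
    ```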

  5. Numerical study of the influence of surface reaction probabilities on reactive species in an rf atmospheric pressure plasma containing humidity

    NASA Astrophysics Data System (ADS)

    Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah

    2018-01-01

    The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.

  6. Effects of heterogeneous traffic with speed limit zone on the car accidents

    NASA Astrophysics Data System (ADS)

    Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-06-01

    Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of the heterogeneity of traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). SLZ in heterogeneous traffic has an important effect, typically in the mixed-velocity case. In the deterministic case, SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with an increasing fraction of fast vehicles (Ff). In the nondeterministic case, SLZ decreases the effect of the braking probability Pb at low densities. Furthermore, the impact of multi-SLZ on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without SLZ (n = 0) is lower than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ in the road decreases the risk of collision in the congestion phase.

  7. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function [p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  8. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  9. Body density differences between negro and caucasian professional football players

    PubMed Central

    Adams, J.; Bagnall, K. M.; McFadden, K. D.; Mottola, M.

    1981-01-01

    Other workers have shown that the bone density for the average negro is greater than for the average caucasian. This would lead to greater values of body density for the average negro but it is confused because the average negro has a different body form (and consequently different proportions of body components) compared with the average caucasian. This study of body density of a group of professional Canadian football players investigates whether or not to separate negroes from caucasians when considering the formation of regression equations for prediction of body density. Accordingly, a group of 7 negroes and 7 caucasians were matched somatotypically and a comparison was made of their body density values obtained using a hydrostatic weighing technique and a closed-circuit helium dilution technique for measuring lung volumes. The results show that if somatotype is taken into account then no significant difference in body density values is found between negro and caucasian professional football players. The players do not have to be placed in separate groups but it remains to be seen whether or not these results apply to general members of the population. PMID:7317724

  10. Effect of the target power density on high-power impulse magnetron sputtering of copper

    NASA Astrophysics Data System (ADS)

    Kozák, Tomáš

    2012-04-01

    We present a model analysis of high-power impulse magnetron sputtering of copper. We use a non-stationary global model based on the particle and energy conservation equations in two zones (the high density plasma ring above the target racetrack and the bulk plasma region), which makes it possible to calculate time evolutions of the averaged process gas and target material neutral and ion densities, as well as the fluxes of these particles to the target and substrate during a pulse period. We study the effect of the increasing target power density under conditions corresponding to a real experimental system. The calculated target current waveforms show a long steady state and are in good agreement with the experimental results. For an increasing target power density, an analysis of the particle densities shows a gradual transition to a metal dominated discharge plasma with an increasing degree of ionization of the depositing flux. The average fraction of target material ions in the total ion flux onto the substrate is more than 90% for average target power densities higher than 500 W cm⁻² in a pulse. The average ionized fraction of target material atoms in the flux onto the substrate reaches 80% for a maximum average target power density of 3 kW cm⁻² in a pulse.

  11. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
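
    A minimal sketch of building such a detection curve by logistic regression and converting it into a false-negative probability; the trial data and coefficients below are simulated, not the study's measurements.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    # hypothetical presence-absence trials: target density (m^-2) and search
    # effort (minutes); every coefficient below is invented for illustration
    density = rng.uniform(0.1, 5.0, 300)
    effort = rng.uniform(1.0, 15.0, 300)
    logit_p = -2.0 + 0.8 * np.log(density) + 0.15 * effort
    detected = (rng.random(300) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

    # logistic detection curve: P(detect | density, effort)
    X = sm.add_constant(np.column_stack([np.log(density), effort]))
    fit = sm.GLM(detected, X, family=sm.families.Binomial()).fit()
    p_hat = fit.predict(np.array([[1.0, np.log(0.5), 10.0]]))[0]
    print("P(false negative at 0.5 m^-2, 10 min) =", 1.0 - p_hat)
    ```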

  12. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  13. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  14. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    PubMed

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving the target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of the average intensity projection (AIP) of the 4D images. Probability-based sorting showed improved similarity of the breathing motion PDF from 4D images to the reference PDF compared to single-cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03; single-cycle sorting, DSC = 0.83 ± 0.05; p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation of motion artifacts and quantitative evaluation of tumor volume precision and accuracy and of the accuracy of the AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multi-cycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of the tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.
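
    Steps (1) and (2), decomposing the respiratory trace into cycles and extracting the main breathing patterns, can be sketched as follows; the peak-detection settings and grouping tolerances are assumptions, not the paper's values.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def main_breathing_cycles(signal, fs, amp_tol=0.2, per_tol=0.2):
        """Split a respiratory trace into cycles at end-exhale minima, group
        cycles by (amplitude, period), and return the groups holding >10% of
        all cycles as (weight, representative cycle) pairs."""
        minima, _ = find_peaks(-signal, prominence=0.5)   # assumed setting
        cycles = [signal[minima[i]:minima[i + 1]] for i in range(len(minima) - 1)]
        feats = np.array([[c.max() - c.min(), len(c) / fs] for c in cycles])
        used = np.zeros(len(cycles), dtype=bool)
        groups = []
        for i in range(len(cycles)):
            if used[i]:
                continue
            close = ((np.abs(feats[:, 0] - feats[i, 0]) < amp_tol * feats[i, 0]) &
                     (np.abs(feats[:, 1] - feats[i, 1]) < per_tol * feats[i, 1]) &
                     ~used)
            used |= close
            members = np.where(close)[0]
            if len(members) > 0.1 * len(cycles):
                # resample members to a common length and average them
                L = int(np.median([len(cycles[m]) for m in members]))
                avg = np.mean([np.interp(np.linspace(0, 1, L),
                                         np.linspace(0, 1, len(cycles[m])),
                                         cycles[m]) for m in members], axis=0)
                groups.append((len(members) / len(cycles), avg))
        return groups

    fs = 10.0
    t = np.arange(0.0, 120.0, 1.0 / fs)
    sig = np.sin(2 * np.pi * t / 4.0) \
        + 0.1 * np.random.default_rng(6).standard_normal(t.size)
    for w, cyc in main_breathing_cycles(sig, fs):
        print(f"weight {w:.2f}, period {len(cyc) / fs:.1f} s")
    ```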

  15. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2007-02-02

    glass. Pgha: Probability of a person being in the glass hazard area; Phit: Probability of hit; Phit(f): Probability of hit for fatality; Phit(maji...): Probability of hit for major injury; Phit(mini): Probability of hit for minor injury; Pi: Debris probability densities at the ES; PMaj(pair): Individual... combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury...

  16. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
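
    The abundance step itself is simple once capture probability is modeled: divide the catch by the cumulative capture probability. A sketch with a constant per-pass probability (the study instead models p by logistic regression on fish length, pass number, and thalweg depth):

    ```python
    def abundance(catch, p_pass, n_passes):
        """N-hat = C / P, where P is the probability that a fish is captured
        in at least one of n passes (constant per-pass probability assumed)."""
        P = 1.0 - (1.0 - p_pass)**n_passes
        return catch / P

    print(abundance(catch=46, p_pass=0.35, n_passes=3))   # ~63 fish
    ```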

  17. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result because of convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
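
    The three estimation approaches can be sketched in a few lines. The fragment below is in Python for consistency with the other examples here (the published tool itself is in R) and applies histogram, kernel, and maximum likelihood estimates to synthetic landslide areas drawn from an Inverse Gamma model; all parameter values are illustrative.

      import numpy as np
      from scipy import stats

      # Synthetic "landslide areas" (m^2); an inverse gamma model is used only
      # to generate plausible heavy-tailed data for the illustration.
      rng = np.random.default_rng(1)
      areas = stats.invgamma.rvs(a=1.4, scale=1e3, size=500, random_state=rng)

      # (i) Histogram density estimation on logarithmic bins
      bins = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 30)
      hde, _ = np.histogram(areas, bins=bins, density=True)

      # (ii) Kernel density estimation (Gaussian kernel on log-area)
      kde = stats.gaussian_kde(np.log(areas))

      # (iii) Maximum likelihood fit of an Inverse Gamma model
      a_hat, _, scale_hat = stats.invgamma.fit(areas, floc=0)
      print(f"MLE Inverse Gamma: shape = {a_hat:.2f}, scale = {scale_hat:.0f}")
      print(f"KDE at median log-area: {kde(np.log(np.median(areas)))[0]:.3f}")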

  18. Evidence for a Low Bulk Crustal Density for Mars from Gravity and Topography.

    PubMed

    Goossens, Sander; Sabaka, Terence J; Genova, Antonio; Mazarico, Erwan; Nicholas, Joseph B; Neumann, Gregory A

    2017-08-16

    Knowledge of the average density of the crust of a planet is important in determining its interior structure. The combination of high-resolution gravity and topography data has yielded a low density for the Moon's crust, yet for other terrestrial planets the resolution of the gravity field models has hampered reasonable estimates. By using well-chosen constraints derived from topography during gravity field model determination using satellite tracking data, we show that we can robustly and independently determine the average bulk crustal density directly from the tracking data, using the admittance between topography and imperfect gravity. We find a low average bulk crustal density for Mars, 2582 ± 209 kg m⁻³. This bulk crustal density is lower than that assumed until now. Densities for volcanic complexes are higher, consistent with earlier estimates, implying large lateral variations in crustal density. In addition, we find indications that the crustal density increases with depth.

  19. Three ages of Venus

    NASA Technical Reports Server (NTRS)

    Wood, Charles A.; Coombs, Cassandra R.

    1989-01-01

    A central question for any planet is the age of its surface. Based on comparative planetological arguments, Venus should be as young and active as the Earth (Wood and Francis). The detection of probable impact craters in the Venera radar images provides a tool for estimating the age of the surface of Venus. Assuming somewhat different crater production rates, Bazilevskiy et al. derived an age of 1 ± 0.5 billion years, and Schaber et al. and Wood and Francis estimated an age of 200 to 400 million years. The known impact craters are not randomly distributed, however, thus some areas must be older and others younger than this average age. Ages were derived for major geologic units on Venus using the Soviet catalog of impact craters (Bazilevskiy et al.) and the most accessible geologic unit map (Bazilevskiy). Crater counts (diameters greater than 20 km), areas, and crater densities are presented for the 7 terrain units and coronae. The procedure for examining the distribution of craters is superior to the purely statistical approaches of Bazilevskiy et al. and Plaut and Arvidson because the bins are larger (average size 16 × 10⁶ sq km) and geologically significant. Crater densities define three distinct groups: relatively heavily cratered (Lakshmi, mountain belts), moderately cratered (smooth and rolling plains, ridge belts, and tesserae), and essentially uncratered (coronae and domed uplands). Following Schaber et al., Grieve's terrestrial cratering rate of 5.4 ± 2.7 craters (>20 km) per 10⁹ yr per 10⁶ sq km was used to calculate ages for the geologic units on Venus. To improve statistics, the data were aggregated into the three crater density groups to derive the ages. For convenience, the three similar age groups are given informal time stratigraphic unit names, from youngest to oldest: Ulfrunian, Sednaian, Lakshmian.
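
    The age calculation reduces to dividing a unit's crater density by the assumed production rate. A worked example with an illustrative count (not a value taken from the catalog):

      # Grieve's terrestrial rate: 5.4 craters (>20 km) per 10^9 yr per 10^6 km^2.
      # Model age of a unit = crater density / production rate.
      RATE = 5.4                                    # craters / 10^9 yr / 10^6 km^2
      n_craters, area_1e6_km2 = 24, 16.0            # illustrative bin
      density = n_craters / area_1e6_km2            # craters per 10^6 km^2
      age_gyr = density / RATE
      print(f"density = {density:.2f} per 10^6 km^2 -> age = {age_gyr * 1000:.0f} Myr")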

  1. Power spectral density of a single Brownian trajectory: what one can and cannot learn from it

    NASA Astrophysics Data System (ADS)

    Krapf, Diego; Marinari, Enzo; Metzler, Ralf; Oshanin, Gleb; Xu, Xinran; Squarcini, Alessio

    2018-02-01

    The power spectral density (PSD) of any time-dependent stochastic process X_t is a meaningful feature of its spectral content. In its text-book definition, the PSD is the Fourier transform of the covariance function of X_t over an infinitely large observation time T, that is, it is defined as an ensemble-averaged property taken in the limit T → ∞. A legitimate question is what information on the PSD can be reliably obtained from single-trajectory experiments, if one goes beyond the standard definition and analyzes the PSD of a single trajectory recorded for a finite observation time T. In quest for this answer, for a d-dimensional Brownian motion (BM) we calculate the probability density function of a single-trajectory PSD for arbitrary frequency f, finite observation time T and arbitrary number k of projections of the trajectory on different axes. We show analytically that the scaling exponent for the frequency-dependence of the PSD specific to an ensemble of BM trajectories can be already obtained from a single trajectory, while the numerical amplitude in the relation between the ensemble-averaged and single-trajectory PSDs is a fluctuating property which varies from realization to realization. The distribution of this amplitude is calculated exactly and is discussed in detail. Our results are confirmed by numerical simulations and single-particle tracking experiments, with remarkably good agreement. In addition we consider a truncated Wiener representation of BM, and the case of a discrete-time lattice random walk. We highlight some differences in the behavior of a single-trajectory PSD for BM and for the two latter situations. The framework developed herein will allow for meaningful physical analysis of experimental stochastic trajectories.
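
    The single-trajectory PSD under discussion is straightforward to reproduce numerically. Below is a minimal sketch (discretisation and normalisation conventions are illustrative) that recovers the f⁻² scaling from one Brownian realisation:

      import numpy as np

      rng = np.random.default_rng(2)
      T, dt = 2**14, 1.0
      x = np.cumsum(rng.normal(0.0, np.sqrt(dt), T))   # one BM trajectory
      xf = np.fft.rfft(x) * dt
      freq = np.fft.rfftfreq(T, dt)
      psd = np.abs(xf)**2 / (T * dt)                   # single-trajectory PSD

      # The f^-2 scaling exponent is already visible in a single realisation:
      lo, hi = 10, 1000
      slope = np.polyfit(np.log(freq[lo:hi]), np.log(psd[lo:hi]), 1)[0]
      print(f"fitted spectral exponent = {slope:.2f} (expected -2)")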

  2. Assessment of Density Variations of Marine Sediments with Ocean and Sediment Depths

    PubMed Central

    Tenzer, R.; Gladkikh, V.

    2014-01-01

    We analyze the density distribution of marine sediments using density samples taken from 716 drill sites of the Deep Sea Drilling Project (DSDP). The samples taken within the upper stratigraphic layer exhibit a prevailing trend of the decreasing density with the increasing ocean depth (at a rate of −0.05 g/cm3 per 1 km). Our results confirm findings of published studies that the density nonlinearly increases with the increasing sediment depth due to compaction. We further establish a 3D density model of marine sediments and propose theoretical models of the ocean-sediment and sediment-bedrock density contrasts. The sediment density-depth equation approximates density samples with an average uncertainty of about 10% and better represents the density distribution especially at deeper sections of basin sediments than a uniform density model. The analysis of DSDP density data also reveals that the average density of marine sediments is 1.70 g/cm3 and the average density of the ocean bedrock is 2.9 g/cm3. PMID:24744686

  4. Atomic density effects on temperature characteristics and thermal transport at grain boundaries through a proper bin size selection

    NASA Astrophysics Data System (ADS)

    Vo, Truong Quoc; Barisik, Murat; Kim, BoHung

    2016-05-01

    This study focuses on the proper characterization of temperature profiles across grain boundaries (GBs) in order to calculate the correct interfacial thermal resistance (ITR) and reveal the influence of GB geometries onto thermal transport. The solid-solid interfaces resulting from the orientation difference between the (001), (011), and (111) copper surfaces were investigated. Temperature discontinuities were observed at the boundary of grains due to the phonon mismatch, phonon backscattering, and atomic forces between dissimilar structures at the GBs. We observed that the temperature decreases gradually in the GB area rather than a sharp drop at the interface. As a result, three distinct temperature gradients developed at the GB which were different than the one observed in the bulk solid. This behavior extends a couple molecular diameters into both sides of the interface where we defined a thickness at GB based on the measured temperature profiles for characterization. Results showed dependence on the selection of the bin size used to average the temperature data from the molecular dynamics system. The bin size on the order of the crystal layer spacing was found to present an accurate temperature profile through the GB. We further calculated the GB thickness of various cases by using potential energy (PE) distributions which showed agreement with direct measurements from the temperature profile and validated the proper binning. The variation of grain crystal orientation developed different molecular densities which were characterized by the average atomic surface density (ASD) definition. Our results revealed that the ASD is the primary factor affecting the structural disorders and heat transfer at the solid-solid interfaces. Using a system in which the planes are highly close-packed can enhance the probability of interactions and the degree of overlap between vibrational density of states (VDOS) of atoms forming at interfaces, leading to a reduced ITR. Thus, an accurate understanding of thermal characteristics at the GB can be formulated by selecting a proper bin size.

  5. Integrating resource selection information with spatial capture--recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  6. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    ERIC Educational Resources Information Center

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  7. The Effect of Phonotactic Probability and Neighbourhood Density on Pseudoword Learning in 6- and 7-Year-Old Children

    ERIC Educational Resources Information Center

    van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…

  8. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
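
    The log-concavity claim is easy to spot-check numerically. The sketch below uses the standard scipy implementation of the non-central χ² pdf (not the paper's finite-sum representation) and verifies that the second differences of the log-pdf are non-positive on a grid:

      import numpy as np
      from scipy import stats

      k_df, lam = 4, 2.5                     # degrees of freedom >= 2, non-centrality
      x = np.linspace(0.05, 30.0, 2000)
      logf = stats.ncx2.logpdf(x, k_df, lam)
      # A concave function has non-positive second differences on a grid.
      print("log-concave on the grid:", np.all(np.diff(logf, 2) < 1e-9))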

  9. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  10. Bats adjust their pulse emission rates with swarm size in the field.

    PubMed

    Lin, Yuan; Abaid, Nicole; Müller, Rolf

    2016-12-01

    Flying in swarms, e.g., when exiting a cave, could pose a problem to bats that use an active biosonar system because the animals could risk jamming each other's biosonar signals. Studies from current literature have found different results with regard to whether bats reduce or increase emission rate in the presence of jamming ultrasound. In the present work, the number of Eastern bent-wing bats (Miniopterus fuliginosus) that were flying inside a cave during emergence was estimated along with the number of signal pulses recorded. Over the range of average bat numbers present in the recording (0 to 14 bats), the average number of detected pulses per bat increased with the average number of bats. The result was interpreted as an indication that the Eastern bent-wing bats increased their emission rate and/or pulse amplitude with swarm size on average. This finding could be explained by the hypothesis that the bats might not suffer from substantial jamming probabilities under the observed density regimes, so jamming might not have been a limiting factor for their emissions. When jamming did occur, the bats could avoid it through changing the pulse amplitude and other pulse properties such as duration or frequency, which has been suggested by other studies. More importantly, the increased biosonar activities may have addressed a collision-avoidance challenge that was posed by the increased swarm size.

  11. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  12. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
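
    For the matrix case, the scaling can be computed with a Sinkhorn-type fixed-point iteration. The sketch below is the classical alternating-marginal scaling, not necessarily the construction used in the paper: the coupling Pi = diag(u) A diag(v) is fitted to row sums q and column sums p, and dividing column j of Pi by p[j] yields a column-stochastic scaled matrix B with B p = q.

      import numpy as np

      def scale_to_channel(A, p, q, iters=500):
          """Diagonal scaling of a positive matrix A so that the result is
          column stochastic and maps the probability vector p to q."""
          u = np.ones(A.shape[0])
          v = np.ones(A.shape[1])
          for _ in range(iters):
              u = q / (A @ v)                 # match row sums of Pi to q
              v = p / (A.T @ u)               # match column sums of Pi to p
          Pi = np.diag(u) @ A @ np.diag(v)
          return Pi / p                       # divide column j by p[j]

      rng = np.random.default_rng(3)
      A = rng.uniform(0.5, 2.0, (4, 4))
      p = np.array([0.1, 0.2, 0.3, 0.4])
      q = np.array([0.4, 0.3, 0.2, 0.1])
      B = scale_to_channel(A, p, q)
      print("column sums:", B.sum(axis=0).round(6))   # all 1: column stochastic
      print("B p =", (B @ p).round(6), "target q =", q)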

  13. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...

  14. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPTs) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and their sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian inverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between normally distributed single CPT samplings and the simulated probability density curve based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  15. On strain and stress in living cells

    NASA Astrophysics Data System (ADS)

    Cox, Brian N.; Smith, David W.

    2014-11-01

    Recent theoretical simulations of amelogenesis and network formation and new, simple analyses of the basic multicellular unit (BMU) allow estimation of the order of magnitude of the strain energy density in populations of living cells in their natural environment. A similar simple calculation translates recent measurements of the force-displacement relation for contacting cells (cell-cell adhesion energy) into equivalent volume energy densities, which are formed by averaging the changes in contact energy caused by a cell's migration over the cell's volume. The rates of change of these mechanical energy densities (energy density rates) are then compared to the order of magnitude of the metabolic activity of a cell, expressed as a rate of production of metabolic energy per unit volume. The mechanical energy density rates are 4-5 orders of magnitude smaller than the metabolic energy density rate in amelogenesis or bone remodeling in the BMU, which involve modest cell migration velocities, and 2-3 orders of magnitude smaller for innervation of the gut or angiogenesis, where migration rates are among the highest for all cell types. For representative cell-cell adhesion gradients, the mechanical energy density rate is 6 orders of magnitude smaller than the metabolic energy density rate. The results call into question the validity of using simple constitutive laws to represent living cells. They also imply that cells need not migrate as inanimate objects of gradients in an energy field, but are better regarded as self-powered automata that may elect to be guided by such gradients or move otherwise. Thus Ġ_el = d/dt ⟨½[(C₁₁ + C₁₂)ε₀² + 2μγ₀²]⟩ = (C₁₁ + C₁₂)ε₀ε̇₀ + 2μγ₀γ̇₀, or Ġ_el = ηEε₀ε̇₀ + η′Eγ₀γ̇₀, with 1.4 ≤ η ≤ 3.4 and 0.7 ≤ η′ ≤ 0.8 for Poisson's ratio in the range 0.2 ≤ ν ≤ 0.4, and η = 1.95 and η′ = 0.75 for ν = 0.3. The spatial distribution of shear strains arising within an individual cell as cells slide past one another during amelogenesis is not known in detail. However, estimates can be inferred from the known relative velocities of the cells' centers of mass. When averaged over a volume comparable to the cell size, representative values of the strain are, to order of magnitude, ε₀ ≈ 0.1 and γ₀ ≈ 0.1. The shape distortions of cells seen, for example, in Fig. 1c, imply peak strains in minor segments of a cell of magnitude unity, ε₀ ≈ 1 and γ₀ ≈ 1; these values represent the upper bound of plausible values and are included for discussion of the extremes of attainable strain energy rates. Given the strain magnitudes, the strain rates follow from the fact that a cell switches from one contacting neighbor in the adjacent row to the next in approximately 0.25 d, during which motion the strains might vary from zero to their maximum values and back again. Thus the most probable shear strain rate is inferred to be γ̇₀ = 10⁻⁶ s⁻¹ and the most probable tensile strain rate is inferred to be ε̇₀ ≈ 10⁻⁶ s⁻¹, with high bounds γ̇₀ = 10⁻⁵ s⁻¹ and ε̇₀ = 10⁻⁵ s⁻¹.
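
    A rough order-of-magnitude evaluation of the elastic power density formula above, using the quoted η = 1.95, η′ = 0.75 (ν = 0.3) and the "most probable" strains and strain rates; the cell stiffness E = 1 kPa is an assumed, illustrative value not given in the abstract.

      eta, eta_prime = 1.95, 0.75
      E = 1.0e3                      # Pa, assumed cell stiffness (illustrative)
      eps0 = gamma0 = 0.1            # representative strains
      eps_rate = gamma_rate = 1e-6   # strain rates, 1/s
      G_el = eta * E * eps0 * eps_rate + eta_prime * E * gamma0 * gamma_rate
      print(f"G_el = {G_el:.1e} W/m^3")   # ~3e-4 W/m^3, far below metabolic rates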

  16. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
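
    Steps (i) and (ii) amount to building one Gaussian kernel density per data set and mixing them with elicited weights. A minimal sketch with synthetic vent coordinates and placeholder weights (the real maps use georeferenced data and expert-derived weights):

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(8)
      vents_a = rng.normal([0.0, 0.0], 1.0, size=(40, 2)).T   # data set A (x, y)
      vents_b = rng.normal([2.0, 1.0], 0.7, size=(25, 2)).T   # data set B (x, y)
      kde_a, kde_b = gaussian_kde(vents_a), gaussian_kde(vents_b)
      w_a, w_b = 0.6, 0.4                                     # placeholder weights

      def vent_density(xy):
          """Weighted linear combination of the per-data-set spatial densities."""
          return w_a * kde_a(xy) + w_b * kde_b(xy)

      grid = np.mgrid[-3:5:80j, -3:4:70j].reshape(2, -1)
      pdf = vent_density(grid)
      cell = (8 / 79) * (7 / 69)                              # grid cell area
      print(f"total probability on grid = {pdf.sum() * cell:.2f}")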

  17. Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    van Lith, Janneke

    The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.

  18. Fractional Brownian motion with a reflecting wall.

    PubMed

    Wada, Alexander H O; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
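
    A small Monte Carlo experiment of this kind can be set up by generating fractional Gaussian noise and reflecting the position at the origin. The sketch below uses a Cholesky factorisation of the increment covariance (adequate for short paths) and the wall rule x → |x + ξ|; details of the published simulations may differ.

      import numpy as np

      def fgn_factor(n, hurst):
          """Cholesky factor of the fractional-Gaussian-noise covariance (small n)."""
          k = np.arange(n)
          gamma = 0.5 * (np.abs(k - 1)**(2 * hurst) - 2.0 * k**(2.0 * hurst)
                         + (k + 1.0)**(2 * hurst))
          return np.linalg.cholesky(gamma[np.abs(k[:, None] - k[None, :])])

      rng = np.random.default_rng(4)
      H, n, walkers = 0.75, 400, 300          # alpha = 2H = 1.5, superdiffusive
      L = fgn_factor(n, H)
      final = np.empty(walkers)
      for w in range(walkers):
          x = 0.0
          for xi in L @ rng.standard_normal(n):
              x = abs(x + xi)                 # hard reflecting wall at x = 0
          final[w] = x
      print(f"<x^2> / t^(2H) at t = {n}: {np.mean(final**2) / n**(2 * H):.3f}")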

  19. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.

  20. Large deviation probabilities for correlated Gaussian stochastic processes and daily temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Kantz, Holger

    2016-04-01

    As we have one and only one earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short-memory) and auto-regressive fractionally integrated moving average (long-memory) processes do not admit an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Whereas short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
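
    The effect of short-range correlations can be demonstrated with a direct Monte Carlo estimate of the large deviation probability for iid versus AR(1) Gaussian sequences; the parameter values below are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)
      N, a, trials, phi = 200, 0.25, 20_000, 0.6

      # iid reference
      iid = rng.standard_normal((trials, N))
      p_iid = np.mean(np.abs(iid.mean(axis=1)) > a)

      # stationary AR(1) with unit marginal variance
      ar = np.empty((trials, N))
      ar[:, 0] = rng.standard_normal(trials)
      innov = rng.standard_normal((trials, N)) * np.sqrt(1 - phi**2)
      for t in range(1, N):
          ar[:, t] = phi * ar[:, t - 1] + innov[:, t]
      p_ar = np.mean(np.abs(ar.mean(axis=1)) > a)

      print(f"P(|time average| > {a}): iid = {p_iid:.4f}, AR(1) = {p_ar:.4f}")
      # Short memory acts like a reduced effective sample size,
      # N_eff = N (1 - phi) / (1 + phi) = N / 4 here.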

  1. Comparison of Aperture Averaging and Receiver Diversity Techniques for Free Space Optical Links in Presence of Turbulence and Various Weather Conditions

    NASA Astrophysics Data System (ADS)

    Kaur, Prabhmandeep; Jain, Virander Kumar; Kar, Subrat

    2014-12-01

    In this paper, we investigate the performance of a Free Space Optic (FSO) link considering the impairments caused by the presence of various weather conditions such as very clear air, drizzle, haze, fog, etc., and turbulence in the atmosphere. An analytic expression for the outage probability is derived using the gamma-gamma distribution for turbulence and accounting for the effect of weather conditions using the Beer-Lambert law. The effect of receiver diversity schemes using aperture averaging and array receivers on the outage probability is studied and compared. As the aperture diameter is increased, the outage probability decreases irrespective of the turbulence strength (weak, moderate and strong) and weather conditions. Similar effects are observed when the number of direct detection receivers in the array is increased. However, it is seen that as the desired level of performance in terms of the outage probability decreases, the array receiver becomes the preferred choice compared with the receiver with aperture averaging.
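
    The outage computation can be mimicked with a Monte Carlo sketch: gamma-gamma fading as the product of two unit-mean gamma variates, weather attenuation via the Beer-Lambert law, and outage as the probability that the received power falls below threshold. All parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(9)
      alpha, beta = 4.0, 2.0                 # turbulence parameters (assumed)
      sigma, length = 0.43e-3, 1500.0        # attenuation [1/m] (haze), link [m]
      margin_db = 10.0                       # clear-air power margin (assumed)

      n = 1_000_000
      h = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)
      loss_db = 10 * np.log10(np.exp(-sigma * length))      # Beer-Lambert loss
      outage = np.mean(10 * np.log10(h) + loss_db < -margin_db)
      print(f"outage probability = {outage:.4f}")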

  2. Aerosol-type retrieval and uncertainty quantification from OMI data

    NASA Astrophysics Data System (ADS)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

    We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of problem for aerosol-type selection. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
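
    The Bayesian model averaging step reduces to an evidence-weighted mixture of per-model AOD posteriors. A minimal sketch on a common AOD grid, with synthetic placeholder densities and evidences:

      import numpy as np

      tau = np.linspace(0.0, 1.0, 501)                 # AOD grid
      dtau = tau[1] - tau[0]

      def gauss(x, mu, sig):
          return np.exp(-0.5 * ((x - mu) / sig)**2) / (sig * np.sqrt(2 * np.pi))

      posteriors = [gauss(tau, 0.32, 0.04), gauss(tau, 0.38, 0.06)]  # per model
      evidence = np.array([1.0e-3, 4.0e-4])            # unnormalised evidences
      w = evidence / evidence.sum()                    # model weights
      bma = w[0] * posteriors[0] + w[1] * posteriors[1]  # averaged AOD posterior
      mean_aod = np.sum(tau * bma) * dtau
      print(f"weights = {np.round(w, 3)}, BMA mean AOD = {mean_aod:.3f}")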

  3. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, the transport code must describe temporal and microspatial density functions to correlate DNA and oxidative damage with non-targeted effects (signaling, bystander effects, etc.); these are ignored by, or impossible in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated. Similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles. Additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.
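
    One of the simpler biophysical outputs mentioned, the Poisson distribution for a specified cellular area, follows directly from the fluence: for fluence F and target area A, the number of direct traversals per cell is Poisson with mean F·A. A back-of-envelope sketch with assumed numbers (not GERM's actual interface):

      import numpy as np
      from scipy import stats

      F, A = 0.02, 100.0        # fluence (1/um^2) and nuclear area (um^2), assumed
      mean_hits = F * A         # expected traversals per cell nucleus
      k = np.arange(6)
      print("P(k traversals):", np.round(stats.poisson.pmf(k, mean_hits), 4))
      print("P(at least one) =", round(1 - np.exp(-mean_hits), 4))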

  4. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  5. Management decision making for fisher populations informed by occupancy modeling

    USGS Publications Warehouse

    Fuller, Angela K.; Linden, Daniel W.; Royle, J. Andrew

    2016-01-01

    Harvest data are often used by wildlife managers when setting harvest regulations for species because the data are regularly collected and do not require logistically and financially challenging studies to obtain. However, when harvest data are not available because an area had not previously supported a harvest season, alternative approaches are required to help inform management decision making. When distribution or density data are required across large areas, occupancy modeling is a useful approach, and under certain conditions, can be used as a surrogate for density. We collaborated with the New York State Department of Environmental Conservation (NYSDEC) to conduct a camera trapping study across a 70,096-km² region of southern New York in areas that were currently open to fisher (Pekania [Martes] pennanti) harvest and those that had been closed to harvest for approximately 65 years. We used detection–nondetection data at 826 sites to model occupancy as a function of site-level landscape characteristics while accounting for sampling variation. Fisher occupancy was influenced positively by the proportion of conifer and mixed-wood forest within a 15-km² grid cell and negatively associated with road density and the proportion of agriculture. Model-averaged predictions indicated high occupancy probabilities (>0.90) when road densities were low (<1 km/km²) and coniferous and mixed forest proportions were high (>0.50). Predicted occupancy ranged from 0.41 to 0.67 in wildlife management units (WMUs) currently open to trapping, which could be used to guide a minimum occupancy threshold for opening new areas to trapping seasons. There were 5 WMUs that had been closed to trapping but had an average predicted occupancy of 0.52 (SE 0.07), above the threshold of 0.41. These areas are currently under consideration by NYSDEC for opening a conservative harvest season. We demonstrate the use of occupancy modeling as an aid to management decision making when harvest-related data are unavailable and when budgetary constraints do not allow for capture–recapture studies to directly estimate density.

  6. Impact crater densities on volcanoes and coronae on venus: implications for volcanic resurfacing.

    PubMed

    Namiki, N; Solomon, S C

    1994-08-12

    The density of impact craters on large volcanoes on Venus is half the average crater density for the planet. The crater density on some classes of coronae is not significantly different from the global average density, but coronae with extensive associated volcanic deposits have lower crater densities. These results are inconsistent with both single-age and steady-state models for global resurfacing and suggest that volcanoes and coronae with associated volcanism have been active on Venus over the last 500 million years.

  7. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal... algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We...

  8. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.

  9. Wavefronts, actions and caustics determined by the probability density of an Airy beam

    NASA Astrophysics Data System (ADS)

    Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón

    2018-07-01

    The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maximum of the probability density of an Airy beam determines a Hamiltonian system.

  10. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.

  11. Estimation and simulation of multi-beam sonar noise.

    PubMed

    Holmin, Arne Johannes; Korneliussen, Rolf J; Tjøstheim, Dag

    2016-02-01

    Methods for the estimation and modeling of noise present in multi-beam sonar data, including the magnitude, probability distribution, and spatial correlation of the noise, are developed. The methods consider individual acoustic samples and facilitate compensation of highly localized noise as well as subtraction of noise estimates averaged over time. The modeled noise is included in an existing multi-beam sonar simulation model [Holmin, Handegard, Korneliussen, and Tjøstheim, J. Acoust. Soc. Am. 132, 3720-3734 (2012)], resulting in an improved model that can be used to strengthen interpretation of data collected in situ at any signal to noise ratio. Two experiments, from the former study in which multi-beam sonar data of herring schools were simulated, are repeated with inclusion of noise. These experiments demonstrate (1) the potentially large effect of changes in fish orientation on the backscatter from a school, and (2) the estimation of behavioral characteristics such as the polarization and packing density of fish schools. The latter is achieved by comparing real data with simulated data for different polarizations and packing densities.

  12. The computer simulation of automobile use patterns for defining battery requirements for electric cars

    NASA Technical Reports Server (NTRS)

    Schwartz, H. J.

    1976-01-01

    A Monte Carlo simulation process was used to develop the U.S. daily range requirements for an electric vehicle from probability distributions of trip lengths and frequencies and average annual mileage data. The analysis shows that a car in the U.S. with a practical daily range of 82 miles (132 km) can meet the needs of the owner on 95% of the days of the year, or at all times other than his long vacation trips. Increasing the range of the vehicle beyond this point will not make it more useful to the owner because it will still not provide intercity transportation. A daily range of 82 miles can be provided by an intermediate battery technology level characterized by an energy density of 30 to 50 watt-hours per pound (66 to 110 W-hr/kg). Candidate batteries in this class are nickel-zinc, nickel-iron, and iron-air. The implication of these results for the research goals of far-term battery systems suggests a shift in emphasis toward lower cost and greater life and away from high energy density.
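
    The Monte Carlo process described can be re-created in a few lines. The distributions below are assumed placeholders, not those of the original study; the point is reading the 95% coverage range off the simulated daily-mileage distribution.

      import numpy as np

      rng = np.random.default_rng(6)
      days = 50_000
      trips = rng.poisson(3.0, size=days)            # trips per day (assumed)
      daily = np.array([rng.lognormal(1.8, 0.9, size=k).sum() for k in trips])
      print(f"daily range covering 95% of days = {np.percentile(daily, 95):.0f} miles")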

  13. Effect of shock waves on the statistics and scaling in compressible isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Wang, Jianchun; Wan, Minping; Chen, Song; Xie, Chenyue; Chen, Shiyi

    2018-04-01

    The statistics and scaling of compressible isotropic turbulence in the presence of large-scale shock waves are investigated by using numerical simulations at turbulent Mach number Mt ranging from 0.30 to 0.65. The spectra of the compressible velocity component, density, pressure, and temperature exhibit a k^(-2) scaling at different turbulent Mach numbers. The scaling exponents for structure functions of the compressible velocity component and thermodynamic variables are close to 1 at high orders n ≥ 3. The probability density functions of increments of the compressible velocity component and thermodynamic variables exhibit a power-law region with the exponent -2. Models for the conditional average of increments of the compressible velocity component and thermodynamic variables are developed based on the ideal shock relations and are verified by numerical simulations. The overall statistics of the compressible velocity component and thermodynamic variables are similar to one another at different turbulent Mach numbers. It is shown that the effect of shock waves on the compressible velocity spectrum and kinetic energy transfer is different from that of acoustic waves.

  14. Monitoring benthic foraminiferal dynamics at Bottsand coastal lagoon (western Baltic Sea)

    NASA Astrophysics Data System (ADS)

    Schönfeld, Joachim

    2018-04-01

    Benthic foraminifera from Bottsand coastal lagoon, western Baltic Sea, have been studied since the mid-1960s and have been monitored annually in late autumn since 2003 at the terminal ditch of the lagoon. Twelve different species were recognised, three of which had not been recorded during earlier investigations. Dominant species showed strong interannual fluctuations and a steady increase in population densities over the last decade. Elphidium incertum, a stenohaline species of the Baltic deep water fauna, colonised the Bottsand lagoon in 2016, most likely during a period of salinities >19 units and water temperatures of 18 °C on average in early autumn. The high salinities probably triggered their germination from a propagule bank in the ditch bottom sediment. The new E. incertum population showed densities higher by an order of magnitude than those of the indigenous species. The latter did not decline, revealing that E. incertum used another food source or occupied a different microhabitat. Elphidium incertum survived transient periods of lower salinities in late autumn 2017, though with reduced abundances, and became a regular faunal constituent of the Bottsand lagoon.

  15. Noise modeling and analysis of an IMU-based attitude sensor: improvement of performance by filtering and sensor fusion

    NASA Astrophysics Data System (ADS)

    K., Nirmal; A. G., Sreejith; Mathew, Joice; Sarpotdar, Mayuresh; Suresh, Ambily; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    We describe the characterization and removal of noise present in the Inertial Measurement Unit (IMU) MPU-6050, which was initially used in an attitude sensor and later in the development of a pointing system for small balloon-borne astronomical payloads. We found that the performance of the IMU degraded with time because of the accumulation of different errors. Using the Allan variance analysis method, we identified the different components of noise present in the IMU and verified the results by power spectral density (PSD) analysis. We first tried to remove the high-frequency noise using smoothing filters, such as the moving average filter and the Savitzky-Golay (SG) filter. Even though we managed to filter out some of the high-frequency noise, the performance of these filters was not satisfactory for our application. Using probability density analysis, we found that the random noise present in the IMU was white Gaussian in nature; hence, we used a Kalman filter, which removed the noise and gave good real-time performance.
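
    A minimal sketch of the non-overlapping Allan variance used in this kind of analysis; the synthetic white-noise signal and the sampling interval stand in for a real MPU-6050 recording.

    ```python
    # Allan variance: half the mean squared difference of successive cluster
    # averages, evaluated for growing averaging times tau = m * dt.
    import numpy as np

    def allan_variance(rate, m):
        """Non-overlapping Allan variance for cluster size m."""
        k = len(rate) // m
        means = rate[:k * m].reshape(k, m).mean(axis=1)   # cluster averages
        return 0.5 * np.mean(np.diff(means) ** 2)

    rng = np.random.default_rng(1)
    dt = 0.01                                  # 100 Hz sampling (assumed)
    rate = rng.normal(0.0, 0.05, 100_000)      # gyro rate, white-noise-like
    for m in (1, 10, 100, 1000):
        print(f"tau = {m * dt:g} s, AVAR = {allan_variance(rate, m):.3e}")
    ```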

  16. A quantitative risk assessment model for Vibrio parahaemolyticus in raw oysters in Sao Paulo State, Brazil.

    PubMed

    Sobrinho, Paulo de S Costa; Destro, Maria T; Franco, Bernadette D G M; Landgraf, Mariza

    2014-06-16

    A risk assessment of Vibrio parahaemolyticus associated with raw oysters produced and consumed in São Paulo State was developed. The model was built according to the United States Food and Drug Administration framework for risk assessment. The exposure assessment estimated the prevalence and density of pathogenic V. parahaemolyticus in raw oysters from harvest to consumption. The result of the exposure step was combined with a Beta-Poisson dose-response model to estimate the probability of illness. The model predicted average risks per serving of raw oysters of 4.7 × 10⁻⁴, 6.0 × 10⁻⁴, 4.7 × 10⁻⁴ and 3.1 × 10⁻⁴ for spring, summer, fall and winter, respectively. Sensitivity analyses indicated that the most influential variables on the risk of illness were the total density of V. parahaemolyticus at harvest, transport temperature, relative prevalence of pathogenic strains and storage time at retail. Only storage time under refrigeration at retail showed a negative correlation with the risk of illness.
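
    The dose-response step has the standard Beta-Poisson form P(ill) = 1 - (1 + d/β)^(-α). The sketch below applies it to a Monte Carlo dose sample; α, β and the dose distribution are placeholders, not the values fitted in the study.

    ```python
    # Average per-serving illness risk from a Beta-Poisson dose-response model.
    import numpy as np

    alpha, beta = 0.6, 1.3e6                  # hypothetical Beta-Poisson parameters
    rng = np.random.default_rng(2)
    dose = rng.lognormal(mean=8.0, sigma=2.0, size=100_000)  # cells per serving

    p_ill = 1.0 - (1.0 + dose / beta) ** (-alpha)
    print("average risk per serving: %.2e" % p_ill.mean())
    ```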

  17. Simulations of material mixing in laser-driven reshock experiments

    NASA Astrophysics Data System (ADS)

    Haines, Brian M.; Grinstein, Fernando F.; Welser-Sherrill, Leslie; Fincke, James R.

    2013-02-01

    We perform simulations of a laser-driven reshock experiment [Welser-Sherrill et al., High Energy Density Phys. (unpublished)] in the strong-shock high energy-density regime to better understand material mixing driven by the Richtmyer-Meshkov instability. Validation of the simulations is based on direct comparison of simulation and radiographic data. Simulations are also compared with published direct numerical simulation and the theory of homogeneous isotropic turbulence. Despite the fact that the flow is neither homogeneous, isotropic nor fully turbulent, there are local regions in which the flow demonstrates characteristics of homogeneous isotropic turbulence. We identify and isolate these regions by the presence of high levels of turbulent kinetic energy (TKE) and vorticity. After reshock, our analysis shows characteristics consistent with those of incompressible isotropic turbulence. Self-similarity and effective Reynolds number assessments suggest that the results are reasonably converged at the finest resolution. Our results show that in shock-driven transitional flows, turbulent features such as self-similarity and isotropy only fully develop once de-correlation, characteristic vorticity distributions, and integrated TKE, have decayed significantly. Finally, we use three-dimensional simulation results to test the performance of two-dimensional Reynolds-averaged Navier-Stokes simulations. In this context, we also test a presumed probability density function turbulent mixing model extensively used in combustion applications.

  18. Variations in Ionospheric Peak Electron Density During Sudden Stratospheric Warmings in the Arctic Region

    NASA Astrophysics Data System (ADS)

    Yasyukevich, A. S.

    2018-04-01

    The focus of the paper is ionospheric disturbances during sudden stratospheric warming (SSW) events in the Arctic region. This study examines the ionospheric behavior during 12 SSW events, which occurred in the Northern Hemisphere over 2006-2013, based on vertical sounding data from the DPS-4 ionosonde located in Norilsk (88.0°E, 69.2°N). Most of the addressed events show that, despite generally quiet geomagnetic conditions, notable changes in the ionospheric behavior are observed during SSWs. During the SSW evolution and peak phases, there is a daytime decrease in NmF2 values of 10-20% relative to the background level. After the SSW maxima, in contrast, midday NmF2 surpasses the average monthly values for 10-20 days. These changes in the electron density are observed for both strong and weak stratospheric warmings occurring in midwinter. The revealed SSW effects in the polar ionosphere are assumed to be associated with changes in the thermospheric neutral composition, affecting the F2-layer electron density. Analysis of the Global Ultraviolet Imager data revealed positive variations in the O/N2 ratio within the thermosphere during the SSW peak and recovery periods. Probable mechanisms for SSW impact on the state of the high-latitude neutral thermosphere and ionosphere are discussed.

  19. Quasar target selection fiber efficiency

    NASA Astrophysics Data System (ADS)

    Newberg, Heidi; Yanny, Brian

    1996-05-01

    We present estimates of the efficiency for finding QSOs as a function of limiting magnitude and galactic latitude. From these estimates, we have formulated a target selection strategy that should net 80,000 QSOs in the north galactic cap with an average of 70 fibers per plate, not including fibers reserved for high-redshift quasars. With this plan, we expect 54% of the targets to be QSOs. The north galactic cap is divided into two zones of high and low stellar density. We use about five times as many fibers for QSO candidates in the half of the survey with the lower stellar density as we use in the half with the higher stellar density. The current plan assigns 15% of the fibers to FIRST radio sources; if these are not available, those fibers would be allocated to lower probability QSO sources, dropping the total number of QSOs by a small factor (5%). We will find about 17,000 additional quasars in the southern strips, and maybe a few more at very high redshift. Use was made of two data sets: the simulated star and quasar test data generated by Don Schneider, and the data from UJFN plate surveys by Koo (1986) and Kron (1980). These data were compared to results from the Palomar-Green Survey and a recent survey by Pat Osmer and collaborators.

  20. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate the density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data.
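
    The chain from sonar equation to detection probability can be sketched as follows; all levels and the logistic detector curve are invented, whereas the paper derives the detector characterization from the actual detection system.

    ```python
    # Monte Carlo over source level and transmission loss: SNR from the passive
    # sonar equation, then an assumed detector curve mapping SNR to P(detect).
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    SL = rng.normal(200.0, 10.0, n)     # source level, dB re 1 uPa (assumed)
    TL = rng.normal(80.0, 5.0, n)       # transmission loss, dB (assumed)
    NL = 60.0                           # noise level, dB (assumed)

    snr = SL - TL - NL                  # passive sonar equation
    p_det = 1.0 / (1.0 + np.exp(-(snr - 55.0) / 3.0))  # hypothetical detector
    print("mean detection probability: %.3f" % p_det.mean())
    ```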

  1. M to L cone ratios determine eye sizes and baseline refractions in chickens.

    PubMed

    Gisbert, Sandra; Schaeffel, Frank

    2018-07-01

    Following a hypothesis raised by M. and J. Neitz, Seattle, we tested whether the abundance and the ratio of long-wavelength-sensitive (L) to middle-wavelength-sensitive (M) cones may affect eye size and the development of myopia in the chicken. Fourteen chickens were treated with frosted plastic diffusers in front of one eye on day 10 post-hatching for a period of 7 days to induce deprivation myopia. Ocular dimensions were measured by A-scan ultrasonography at the beginning and at the end of the treatment, and the development of refractive state was tracked using infrared photorefraction. At the end of the treatment period, L and M cone densities and ratios were analyzed in retinal flat mounts of both myopic and control eyes, using the red and yellow oil droplets as markers. Because large numbers of cones were counted (>10,000), software was written in Visual C++ for automated cone detection and density analysis. (1) On average, 9.7 ± 1.7 D of deprivation myopia was induced in 7 days (range 6.8 D to 13.7 D), with an average increase in axial length of 0.65 ± 0.20 mm (range 0.42-1.00 mm); (2) the increase in vitreous chamber depth was correlated with the increase in myopic refractive error; (3) average central M cone densities were 10,498 cells/mm² and L cone densities 9,574 cells/mm², while in the periphery M cone densities were 6,343 cells/mm² and L cone densities 5,735 cells/mm²; (4) M to L cone ratios were highly correlated in both eyes of each animal (p < 0.01 in all cases); (5) the most striking finding was that ratios of M to L cones were significantly correlated with vitreous chamber depths and refractive states in the control eyes with normal vision, both in the central and peripheral retinas (p < 0.05 to p < 0.01); (6) M to L cone ratios did not, however, predict the amount of deprivation myopia that could be induced. M and L cone ratios are most likely genetically determined in each animal. The more L cones, the deeper the vitreous chambers and the more myopic the refractions. M to L cone ratios may determine the set point of emmetropization and thereby ultimately the probability of becoming myopic. Deprivation myopia was not determined by M to L cone ratios.

  2. A two-scale scattering model with application to the JONSWAP '75 aircraft microwave scatterometer experiment

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1977-01-01

    The general problem of bistatic scattering from a two-scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two-scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two-scale roughness distributions, namely the probability density of the large-scale surface slope and the capillary wavenumber spectrum. Best fits yield, on average, a 0.7 dB rms difference between the model computations and the vertical polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear least squares regressions.

  3. The roles of the trading time risks on stock investment return and risks in stock price crashes

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Dong, Zhi-Wei; Yang, Guo-Hui; Long, Chao

    2017-03-01

    The roles of trading time risks (TTRs) in stock investment return and risk are investigated under the condition of stock price crashes, using Hushen300 data (CSI300) and Dow Jones Industrial Average data (^DJI), respectively. To describe the TTR, we employ the escape time, i.e., the time taken for the stock price to drop from its maximum to its minimum value within a data window length (DWL). After theoretical and empirical research on the probability density function of returns, the results for both ^DJI and CSI300 indicate that: (i) as DWL increases, the expected return and its stability are weakened; (ii) an optimal TTR is associated with a maximum return and minimum risk of stock investment in stock price crashes.

  4. A compound scattering pdf for the ultrasonic echo envelope and its relationship to K and Nakagami distributions.

    PubMed

    Shankar, P Mohana

    2003-03-01

    A compound probability density function (pdf) is presented to describe the envelope of the backscattered echo from tissue. This pdf allows local and global variation in scattering cross sections in tissue. The ultrasonic backscattering cross sections are assumed to be gamma distributed. The gamma distribution also is used to model the randomness in the average cross sections. This gamma-gamma model results in the compound scattering pdf for the envelope. The relationship of this compound pdf to the Rayleigh, K, and Nakagami distributions is explored through an analysis of the signal-to-noise ratio of the envelopes and random number simulations. The three parameter compound pdf appears to be flexible enough to represent envelope statistics giving rise to Rayleigh, K, and Nakagami distributions.
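
    For intuition (illustrative only, not the paper's derivation), the following sketch compounds two gamma distributions and checks how the envelope signal-to-noise ratio departs from the Rayleigh value, mirroring the SNR analysis mentioned above. The shape parameters are invented.

    ```python
    # Gamma-gamma compounding: the average cross section is itself gamma
    # distributed, the local intensity is gamma around it; the envelope SNR
    # (mean/std) is compared with the Rayleigh value of about 1.91.
    import numpy as np

    rng = np.random.default_rng(4)
    n, k_global, k_local = 200_000, 2.0, 1.0
    mean_cs = rng.gamma(shape=k_global, scale=1.0 / k_global, size=n)
    intensity = rng.gamma(shape=k_local, size=n) * mean_cs / k_local
    envelope = np.sqrt(intensity)

    print("envelope SNR = %.3f (Rayleigh: 1.913)"
          % (envelope.mean() / envelope.std()))
    ```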

  5. Terrestrial Ozone Depletion Due to a Milky Way Gamma-Ray Burst

    NASA Technical Reports Server (NTRS)

    Thomas, Brian C.; Jackman, Charles H.; Melott, Adrian L.; Laird, Claude M.; Stolarski, Richard S.; Gehrels, Neil; Cannizzo, John K.; Hogan, Daniel P.

    2005-01-01

    Based on cosmological rates, it is probable that at least once in the last Gyr the Earth has been irradiated by a gamma-ray burst in our Galaxy from within 2 kpc. Using a two-dimensional atmospheric model, we have computed the effects of one such burst upon the Earth's atmosphere. A ten-second burst delivering 100 kJ/sq m to the Earth results in globally averaged ozone depletion of 35%, with depletion reaching 55% at some latitudes. Significant global depletion persists for over 5 years after the burst. This depletion would have dramatic implications for life, since a 50% decrease in ozone column density results in approximately three times the normal UVB flux. Widespread extinctions are likely, based on extrapolation from the UVB sensitivity of modern organisms.

  6. Quantum dynamics of water dissociative chemisorption on rigid Ni(111): An approximate nine-dimensional treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bin (E-mail: bjiangch@ustc.edu.cn, hguo@unm.edu; Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131); Song, Hongwei

    The quantum dynamics of water dissociative chemisorption on the rigid Ni(111) surface is investigated using a recently developed nine-dimensional potential energy surface. The quantum dynamical model explicitly includes seven degrees of freedom of D₂O at fixed surface sites, and the final results were obtained with a site-averaging model. The mode specificity in the site-specific results is reported and analyzed. Finally, the approximate sticking probabilities for various vibrationally excited states of D₂O are obtained considering surface lattice effects and, formally, all nine degrees of freedom. The comparison with experiment reveals the inaccuracy of the density functional theory and suggests the need to improve the potential energy surface.

  7. Correlated continuous time random walk and option pricing

    NASA Astrophysics Data System (ADS)

    Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao

    2016-04-01

    In this paper, we study a correlated continuous time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. We then apply this process to the option pricing problem. Supposing the price of the underlying asset is driven by this CCTRW, we find that the model captures the subdiffusive characteristic of financial markets. Using the mean self-financing hedging strategy, we obtain closed-form pricing formulas for a European option with and without transaction costs, respectively. Finally, comparing this model with the classical Black-Scholes model, we find that the price obtained here is higher than that obtained from the Black-Scholes model. An empirical analysis is also introduced to confirm that the obtained results fit real data well.

  8. LES, DNS and RANS for the analysis of high-speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Taulbee, Dale B.; Adumitroaie, Virgil; Sabini, George J.; Shieh, Geoffrey S.

    1994-01-01

    The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Sep. 1993 - 1 Sep. 1994, we have focused our efforts on two research problems: (1) developments of 'algebraic' moment closures for statistical descriptions of nonpremixed reacting systems, and (2) assessments of the Dirichlet frequency in presumed scalar probability density function (PDF) methods in stochastic description of turbulent reacting flows. This report provides a complete description of our efforts during this past year as supported by the NASA Langley Research Center under Grant NAG1-1122.

  9. Intermittent particle distribution in synthetic free-surface turbulent flows.

    PubMed

    Ducasse, Lauris; Pumir, Alain

    2008-06-01

    Tracer particles on the surface of a turbulent flow have a very intermittent distribution. This preferential concentration effect is studied in a two-dimensional synthetic compressible flow, both in the inertial (self-similar) and in the dissipative (smooth) range of scales, as a function of the compressibility C. The second moment of the concentration coarse grained over a scale r, ⟨n_r²⟩, behaves as a power law in both the inertial and the dissipative ranges of scale, with two different exponents. The shapes of the probability distribution functions of the coarse-grained density n_r vary as a function of scale r and of compressibility C through the combination C/r^κ (κ ≈ 0.5), corresponding to the compressibility, coarse grained over a domain of scale r, averaged over Lagrangian trajectories.

  10. Coupling of link- and node-ordering in the coevolving voter model.

    PubMed

    Toruniewska, J; Kułakowski, K; Suchecki, K; Hołyst, J A

    2017-10-01

    We consider the process of reaching the final state in the coevolving voter model. There is a coevolution of state dynamics, where a node can copy a state from a random neighbor with probability 1-p, and link dynamics, where a node can rewire its link to another node of the same state with probability p. The model exhibits an absorbing transition to a frozen phase above a critical value of the rewiring probability. Our analytical and numerical studies show that in the active phase the mean values of the magnetization of nodes n and links m tend to the same value, which depends on initial conditions. In a similar way, the mean degrees of spins up and spins down become equal. The system obeys a special statistical conservation law, since a linear combination of both types of magnetization, averaged over many realizations starting from the same initial conditions, is a constant of motion: Λ ≡ (1-p)μm(t) + pn(t) = const., where μ is the mean node degree. The final mean magnetization of nodes and links in the active phase is proportional to Λ, while the final density of active links is a square function of Λ. If the rewiring probability is above the critical value and the system separates into disconnected domains, then the magnetizations of nodes and links are not the same, and the final mean degrees of spins up and spins down can differ.

  11. Ecology of a Maryland population of black rat snakes (Elaphe o. obsoleta)

    USGS Publications Warehouse

    Stickel, L.F.; Stickel, W.H.; Schmid, F.C.

    1980-01-01

    Behavior, growth and age of black rat snakes under natural conditions were investigated by mark-recapture methods at the Patuxent Wildlife Research Center for 22 years (1942-1963), with limited observations for 13 more years (1964-1976). Over the 35-year period, 330 snakes were recorded a total of 704 times. Individual home ranges remained stable for many years; male ranges averaged at least 600 m in diameter and female ranges at least 500 m, each including a diversity of habitats, evidenced also in records of foods. Population density was low, probably less than 0.5 snake/ha. Peak activity of both sexes was in May and June, with a secondary peak in September. Large trees in the midst of open areas appeared to serve a significant functional role in the behavioral life pattern of the snake population. Male combat was observed three times in the field. Male snakes grew more rapidly than females, attained larger sizes and lived longer. Some individuals of both sexes probably lived 20 years or more. Weight-length relationships changed as the snakes grew and developed heavier bodies in proportion to length. Growth apparently continued throughout life. Some individuals, however, both male and female, stopped growing for periods of 1 or 2 years and then resumed, a condition probably related to poor health, suggested by skin ailments.

  12. Oak regeneration and overstory density in the Missouri Ozarks

    Treesearch

    David R. Larsen; Monte A. Metzger

    1997-01-01

    Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...

  13. Vacuum quantum stress tensor fluctuations: A diagonalization approach

    NASA Astrophysics Data System (ADS)

    Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.

    2018-01-01

    Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.

  14. Evidence for skipped spawning in a potamodromous cyprinid, humpback chub (Gila cypha), with implications for demographic parameter estimates

    USGS Publications Warehouse

    Pearson, Kristen Nicole; Kendall, William L.; Winkelman, Dana L.; Persons, William R.

    2015-01-01

    Our findings reveal evidence for skipped spawning in a potamodromous cyprinid, humpback chub (HBC; Gila cypha). Using closed robust design mark-recapture models, we found, on average, spawning HBC transition to the skipped spawning state with a probability of 0.45 (95% CRI (i.e. credible interval): 0.10, 0.80) and skipped spawners remain in the skipped spawning state with a probability of 0.60 (95% CRI: 0.26, 0.83), yielding an average spawning cycle of every 2.12 years, conditional on survival. As a result, migratory skipped spawners are unavailable for detection during annual sampling events. If availability is unaccounted for, survival and detection probability estimates will be biased. Therefore, we estimated annual adult survival probability (S), while accounting for skipped spawning, and found S remained reasonably stable throughout the study period, with an average of 0.75 (95% CRI: 0.66, 0.82; process variance σ² = 0.005), while skipped spawning probability was highly dynamic (σ² = 0.306). By improving understanding of HBC spawning strategies, conservation decisions can be based on less biased estimates of survival and a more informed population model structure.

  15. Physical and biochemical properties of green banana flour.

    PubMed

    Suntharalingam, S; Ravindran, G

    1993-01-01

    Banana flours prepared from two cooking banana varieties, namely 'Alukehel' and 'Monthan', were evaluated for their physical and biochemical characteristics. The yields of flour averaged 31.3% for 'Alukehel' and 25.5% for 'Monthan'. The pH of the flour ranged from 5.4 to 5.7. The bulk density and particle size distribution were also measured. The average chemical composition (% dry matter) of the flours was as follows: crude protein, 3.2; crude fat, 1.3; ash, 3.7; neutral detergent fiber, 8.9; acid detergent fiber, 3.8; cellulose, 3.1; lignin, 1.0; and hemicellulose, 5.0. Carbohydrate composition indicated the flour to contain 2.8% soluble sugars, 70.0% starch and 12.0% non-starch polysaccharides. Potassium is the predominant mineral in banana flour. Fresh green banana is a good source of vitamin C, but almost 65% is lost during the preparation of flour. The oxalate content (1.1-1.6%) of banana flour is probably nutritionally insignificant. The overall results are suggestive of the potential of green bananas as a source of flour.

  16. Gyrokinetic Simulations of Transport Scaling and Structure

    NASA Astrophysics Data System (ADS)

    Hahm, Taik Soo

    2001-10-01

    There is accumulating evidence from global gyrokinetic particle simulations with profile variations and experimental fluctuation measurements that microturbulence, with its time-averaged eddy size scaling with the ion gyroradius, can cause ion thermal transport which deviates from the gyro-Bohm scaling. The physics here can be best addressed by large-scale (ρ* = ρ_i/a = 0.001) full-torus gyrokinetic particle-in-cell turbulence simulations using our massively parallel, general geometry gyrokinetic toroidal code with field-aligned mesh. Simulation results from device-size scans for realistic parameters show that the "wave transport" mechanism is not the dominant contribution to this Bohm-like transport and that transport is mostly diffusive, driven by microscopic-scale fluctuations in the presence of self-generated zonal flows. In this work, we analyze the turbulence and zonal flow statistics from simulations and compare them to nonlinear theoretical predictions, including the radial decorrelation of the transport events by zonal flows and the resulting probability distribution function (PDF). In particular, possible deviation of the characteristic radial size of transport processes from the time-averaged radial size of the density fluctuation eddies will be critically examined.

  17. Insights into bioassessment of marine pollution using body-size distinctness of planktonic ciliates based on a modified trait hierarchy.

    PubMed

    Xu, Henglong; Jiang, Yong; Xu, Guangjian

    2016-06-15

    Based on a modified trait hierarchy of body-size units, the feasibility of bioassessment of water pollution using body-size distinctness of planktonic ciliates was studied in a semi-enclosed bay, northern China. An annual dataset was collected at five sampling stations within a gradient of heavy metal contaminants. Results showed that: (1) in terms of probability density, the body-size spectra of the ciliates represented significant differences among the five stations; (2) bootstrap average analysis demonstrated a spatial variation in body-size rank patterns in response to pollution stress due to heavy metals; and (3) the average body-size distinctness (Δz⁺) and variation in body-size distinctness (Λz⁺), based on the modified trait hierarchy, revealed a clear departure pattern from the expected body-size spectra in areas with pollutants. These results suggest that the body-size diversity measures based on the modified trait hierarchy of the ciliates may be used as a potential indicator of marine pollution.

  18. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  19. Investigation of the relation between the return periods of major drought characteristics using copula functions

    NASA Astrophysics Data System (ADS)

    Hüsami Afşar, Mehdi; Unal Şorman, Ali; Tugrul Yilmaz, Mustafa

    2016-04-01

    Different drought characteristics (e.g., duration, average severity, and average areal extent) often show a monotonic relation, such that an increase in the magnitude of one is typically followed by a similar increase in the magnitude of another. Hence, it is viable to establish a relationship between different drought characteristics with the goal of predicting one from the others. Copula functions, which relate different variables through their joint and conditional cumulative probability distributions, are often used to statistically model drought characteristics. In this study, bivariate and trivariate joint probabilities of these characteristics are obtained over Ankara (Turkey) between 1960 and 2013. Copula-based return period estimation shows that joint probabilities of duration, average severity, and average areal extent can be satisfactorily achieved. Among the copula families investigated in this study, the elliptical family (i.e., the normal and t-student copula functions) resulted in the lowest root mean square error. This study was supported by TUBITAK fund #114Y676.
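
    A hedged sketch of the copula step on synthetic duration/severity pairs (stand-ins for the Ankara series): map each margin to normal scores via ranks, fit a Gaussian copula, and estimate the joint probability that both characteristics exceed their 90th percentiles.

    ```python
    # Gaussian-copula joint exceedance probability for two drought characteristics.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    duration = rng.gamma(2.0, 3.0, 500)
    severity = 0.8 * duration + rng.gamma(1.5, 1.0, 500)   # monotonic relation

    def normal_scores(x):
        ranks = np.argsort(np.argsort(x)) + 1.0
        return norm.ppf(ranks / (len(x) + 1.0))

    z = np.column_stack([normal_scores(duration), normal_scores(severity)])
    rho = np.corrcoef(z.T)[0, 1]                           # copula parameter

    sims = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], 200_000)
    u = norm.cdf(sims)
    p_joint = np.mean((u[:, 0] > 0.9) & (u[:, 1] > 0.9))
    print("rho = %.2f, P(joint exceedance) = %.4f, return period ~ %.1f events"
          % (rho, p_joint, 1.0 / p_joint))
    ```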

  20. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when t_a ≪ t and t_a ≫ t.

  1. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.

  2. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    Our objective was to understand how often 'breakthroughs', that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they had discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured.
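
    The estimation step can be sketched with scipy's weighted Gaussian KDE (the study uses an adaptive kernel); the synthetic log hazard ratios, weights, and the HR < 0.5 threshold for a "large" effect below are illustrative.

    ```python
    # Weighted kernel density estimate of treatment effects, with the tail
    # integral giving the probability of observing a "large" effect.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(6)
    effects = rng.normal(0.0, 0.25, 1064)           # log(HR) per comparison
    weights = rng.uniform(0.5, 1.5, effects.size)   # e.g. inverse-variance weights

    kde = gaussian_kde(effects, weights=weights)
    p_large = kde.integrate_box_1d(-np.inf, np.log(0.5))   # tail mass below HR 0.5
    print("P(large effect): %.3f" % p_large)
    ```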

  3. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  4. Blending Multiple Nitrogen Dioxide Data Sources for Neighborhood Estimates of Long-Term Exposure for Health Research.

    PubMed

    Hanigan, Ivan C; Williamson, Grant J; Knibbs, Luke D; Horsley, Joshua; Rolfe, Margaret I; Cope, Martin; Barnett, Adrian G; Cowie, Christine T; Heyworth, Jane S; Serre, Marc L; Jalaludin, Bin; Morgan, Geoffrey G

    2017-11-07

    Exposure to traffic-related nitrogen dioxide (NO2) air pollution is associated with adverse health outcomes. Average pollutant concentrations from fixed monitoring sites are often used to estimate exposures for health studies; however, these can be imprecise due to the difficulty and cost of spatial modeling at the resolution of neighborhoods (e.g., a scale of tens of meters) rather than at a coarse scale (around several kilometers). The objective of this study was to derive improved estimates of neighborhood NO2 concentrations by blending measurements with modeled predictions in Sydney, Australia (a low-pollution environment). We implemented the Bayesian maximum entropy approach to blend data with uncertainty defined using informative priors. We compiled NO2 data from fixed-site monitors, chemical transport models, and satellite-based land use regression models to estimate neighborhood annual average NO2. The spatial model produced a posterior probability density function of estimated annual average concentrations that spanned an order of magnitude from 3 to 35 ppb. Validation using independent data showed improvement, with a root mean squared error improvement of 6% compared with the land use regression model and 16% over the chemical transport model. These estimates will be used in studies of health effects and should minimize misclassification bias.
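
    A grossly simplified illustration of blending a modeled prior with a monitor measurement, written as a precision-weighted Gaussian update; the study's Bayesian maximum entropy machinery generalizes this to full spatial fields, and all numbers here are invented.

    ```python
    # Precision-weighted blend of a modeled NO2 prior and a measurement.
    model_no2, model_sd = 12.0, 4.0   # chemical-transport/LUR prior, ppb (assumed)
    obs_no2, obs_sd = 18.0, 2.0       # fixed-site measurement, ppb (assumed)

    w = obs_sd**-2 / (obs_sd**-2 + model_sd**-2)
    blended = w * obs_no2 + (1.0 - w) * model_no2
    blended_sd = (obs_sd**-2 + model_sd**-2) ** -0.5
    print(f"blended NO2 = {blended:.1f} ppb (sd {blended_sd:.1f})")
    ```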

  5. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  6. Evolution of a hybrid micro-macro entangled state of the qubit-oscillator system via the generalized rotating wave approximation

    NASA Astrophysics Data System (ADS)

    Chakrabarti, R.; Yogesh, V.

    2016-04-01

    We study the evolution of the hybrid entangled states in a bipartite (ultra) strongly coupled qubit-oscillator system. Using the generalized rotating wave approximation the reduced density matrices of the qubit and the oscillator are obtained. The reduced density matrix of the oscillator yields the phase space quasi probability distributions such as the diagonal P-representation, the Wigner W-distribution and the Husimi Q-function. In the strong coupling regime the Q-function evolves to uniformly separated macroscopically distinct Gaussian peaks representing ‘kitten’ states at certain specified times that depend on multiple time scales present in the interacting system. The ultrastrong coupling strength of the interaction triggers appearance of a large number of modes that quickly develop a randomization of their phase relationships. A stochastic averaging of the dynamical quantities sets in, and leads to the decoherence of the system. The delocalization in the phase space of the oscillator is studied by using the Wehrl entropy. The negativity of the W-distribution reflects the departure of the oscillator from the classical states, and allows us to study the underlying differences between various information-theoretic measures such as the Wehrl entropy and the Wigner entropy. Other features of nonclassicality such as the existence of the squeezed states and appearance of negative values of the Mandel parameter are realized during the course of evolution of the bipartite system. In the parametric regime studied here these properties do not survive in the time-averaged limit.

  7. Scalar decay in two-dimensional chaotic advection and Batchelor-regime turbulence

    NASA Astrophysics Data System (ADS)

    Fereday, D. R.; Haynes, P. H.

    2004-12-01

    This paper considers the decay in time of an advected passive scalar in a large-scale flow. The relation between the decay predicted by "Lagrangian stretching theories," which consider evolution of the scalar field within a small fluid element and then average over many such elements, and that observed at large times in numerical simulations, associated with emergence of a "strange eigenmode," is discussed. Qualitative arguments are supported by results from numerical simulations of scalar evolution in two-dimensional spatially periodic, time aperiodic flows, which highlight the differences between the actual behavior and that predicted by the Lagrangian stretching theories. In some cases the decay rate of the scalar variance is different from the theoretical prediction and determined globally, and in other cases it apparently matches the theoretical prediction. An updated theory for the wavenumber spectrum of the scalar field and a theory for the probability distribution of the scalar concentration are presented. The wavenumber spectrum and the probability density function both depend on the decay rate of the variance, but can otherwise be calculated from the statistics of the Lagrangian stretching history. In cases where the variance decay rate is not determined by the Lagrangian stretching theory, the wavenumber spectrum for scales that are much smaller than the length scale of the flow but much larger than the diffusive scale is argued to vary as k^(-1+ρ), where k is wavenumber and ρ is a positive number which depends on the decay rate of the variance γ₂ and on the Lagrangian stretching statistics. The probability density function for the scalar concentration is argued to have algebraic tails, with exponent roughly -3, and with a cutoff that is determined by the diffusivity κ and scales roughly as κ^(-1/2); these predictions are shown to be in good agreement with numerical simulations.

  8. Nitrogen oxide emission calculation for post-Panamax container ships by using engine operation power probability as weighting factor: A slow-steaming case.

    PubMed

    Cheng, Chih-Wen; Hua, Jian; Hwang, Daw-Shang

    2018-06-01

    In this study, the nitrogen oxide (NOx) emission factors and total NOx emissions of two groups of post-Panamax container ships operating on a long-term slow-steaming basis along Euro-Asian routes were calculated using both the probability density function of engine power levels and the NOx emission function. The main engines of the five sister ships in Group I satisfied the Tier I emission limit stipulated in MARPOL (International Convention for the Prevention of Pollution from Ships) Annex VI, and those in Group II satisfied the Tier II limit. The calculated NOx emission factors of the Group I and Group II ships were 14.73 and 17.85 g/kWh, respectively. The total NOx emissions of the Group II ships were determined to be 4.4% greater than those of the Group I ships. When the Tier II certification value was used to calculate the average total NOx emissions of the Group II engines, the result was lower than the actual value by 21.9%. Although fuel consumption and carbon dioxide (CO2) emissions increased by 1.76% because of slow steaming, the NOx emissions were markedly reduced by 17.2%. The proposed method is more effective and accurate than the NOx Technical Code 2008, and it can be more appropriately applied to determine the NOx emissions of the international shipping inventory. Using the operating power probability density function of diesel engines as the weighting factor, together with the NOx emission function obtained from test-bed measurements, gives a more accurate and practical calculation of NOx emissions. The proposed method is suitable for all types and purposes of diesel engines, irrespective of their operating power level, and should be considered in determining the carbon tax to be imposed in the future.
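
    The weighting idea reduces to integrating a bench-test emission curve e(P) against the observed probability density f(P) of operating power; in the sketch below both curves are hypothetical stand-ins for the ships' data.

    ```python
    # Power-probability-weighted NOx emission factor: EF = integral of e(P) f(P) dP.
    import numpy as np

    P = np.linspace(0.1, 1.0, 181)                 # fraction of rated power
    dP = P[1] - P[0]
    f = np.exp(-0.5 * ((P - 0.45) / 0.12) ** 2)    # slow-steaming operating pdf
    f /= f.sum() * dP                              # normalize to unit area
    e = 20.0 - 6.0 * P                             # emission curve, g/kWh (assumed)

    ef_weighted = (e * f).sum() * dP               # discrete approximation of the integral
    print("weighted NOx emission factor: %.2f g/kWh" % ef_weighted)
    ```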

  9. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1980-01-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  10. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface]

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  11. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  12. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
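
    As a minimal, hypothetical sketch of the distribution family involved, the following evaluates a three-component Rayleigh mixture PDF of the kind a multi-Rayleigh model is built from; the weights and scales are invented, not fitted liver parameters.

    ```python
    # Multi-Rayleigh (Rayleigh-mixture) envelope PDF:
    # p(x) = sum_i w_i * (x / s_i^2) * exp(-x^2 / (2 s_i^2)).
    import numpy as np

    def multi_rayleigh_pdf(x, weights, scales):
        x = np.asarray(x, dtype=float)[:, None]
        w = np.asarray(weights, dtype=float)
        s = np.asarray(scales, dtype=float)
        return (w * (x / s**2) * np.exp(-x**2 / (2.0 * s**2))).sum(axis=1)

    x = np.linspace(0.0, 5.0, 501)
    pdf = multi_rayleigh_pdf(x, weights=[0.6, 0.3, 0.1], scales=[0.5, 1.0, 2.0])
    print("area under pdf on [0, 5]: %.4f" % (pdf.sum() * (x[1] - x[0])))
    ```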

  13. Heterozygosity-fitness correlations in a wild mammal population: accounting for parental and environmental effects.

    PubMed

    Annavi, Geetha; Newman, Christopher; Buesching, Christina D; Macdonald, David W; Burke, Terry; Dugdale, Hannah L

    2014-06-01

    HFCs (heterozygosity-fitness correlations) measure the direct relationship between an individual's genetic diversity and fitness. The effects of parental heterozygosity and the environment on HFCs are currently under-researched. We investigated these in a high-density U.K. population of European badgers (Meles meles), using a multimodel capture-mark-recapture framework and 35 microsatellite loci. We detected interannual variation in first-year, but not adult, survival probability. Adult females had higher annual survival probabilities than adult males. Cubs with more heterozygous fathers had higher first-year survival, but only in wetter summers; there was no relationship with individual or maternal heterozygosity. Moist soil conditions enhance badger food supply (earthworms), improving survival. In dryer years, higher indiscriminate mortality rates appear to mask differential heterozygosity-related survival effects. This paternal interaction was significant in the most supported model; however, the model-averaged estimate had a relative importance of 0.50 and overlapped zero slightly. First-year survival probabilities were not correlated with the inbreeding coefficient (f); however, small sample sizes limited the power to detect inbreeding depression. Correlations between individual heterozygosity and inbreeding were weak, in line with published meta-analyses showing that HFCs tend to be weak. We found support for general rather than local heterozygosity effects on first-year survival probability, and g2 indicated that our markers had power to detect inbreeding. We emphasize the importance of assessing how environmental stressors can influence the magnitude and direction of HFCs and of considering how parental genetic diversity can affect fitness-related traits, which could play an important role in the evolution of mate choice.

  14. PYFLOW 2.0. A new open-source software for quantifying the impact and depositional properties of dilute pyroclastic density currents

    NASA Astrophysics Data System (ADS)

    Dioguardi, Fabio; Dellino, Pierfrancesco

    2017-04-01

    Dilute pyroclastic density currents (DPDCs) are ground-hugging turbulent gas-particle flows that move down volcano slopes under the combined action of density contrast and gravity. DPDCs are dangerous for human lives and infrastructure both because they exert a dynamic pressure in their direction of motion and because they transport volcanic ash particles, which remain in the atmosphere during the waning stage and after the passage of a DPDC. Deposits formed by the passage of a DPDC show peculiar characteristics that can be linked to flow field variables with sedimentological models. Here we present PYFLOW_2.0, a significantly improved version of the code of Dioguardi and Dellino (2014), which was already used extensively for the hazard assessment of DPDCs at Campi Flegrei and Vesuvius (Italy). In this new version the code structure, the computation times, and the data input method have been updated and improved. A set of shape-dependent drag laws has been implemented so as to better estimate the aerodynamic drag of particles transported and deposited by the flow. A depositional model for calculating the deposition time and rate of the ash and lapilli layer formed by the pyroclastic flow has also been included. This model links deposit properties (e.g., componentry, grain size) to flow characteristics (e.g., flow average density and shear velocity), the latter either calculated by the code itself or given as input by the user. The deposition rate is calculated by summing the contributions of each grain-size class of all components constituting the deposit (e.g., juvenile particles, crystals, etc.), which are in turn computed as a function of particle density, terminal velocity, concentration, and deposition probability. Here we apply the concept of deposition probability, previously introduced for estimating the deposition rates of turbidity currents (Stow and Bowen, 1980), to DPDCs, although with a different approach, i.e., starting from what is observed in the deposit (e.g., the weight-fraction ratios between the different grain-size classes). In this way, more realistic estimates of the deposition rate can be obtained, as the deposition probabilities of the different grain-size classes constituting the DPDC deposit may differ and are not necessarily equal to unity. As experimental validation, deposition rates of large-scale experiments, previously computed with different methods, have been calculated and are presented. Results of model application to DPDCs and turbidity currents will also be presented. References: Dioguardi, F., and P. Dellino (2014), PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data, Comput. Geosci., 66, 200-210, doi:10.1016/j.cageo.2014.01.013; Stow, D. A. V., and A. J. Bowen (1980), A physical model for the transport and sorting of fine-grained sediment by turbidity currents, Sedimentology, 27, 31-46.
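
    A hedged sketch of the summation described above: the bulk deposition rate as the sum over grain-size classes of concentration times terminal velocity times deposition probability. The per-class numbers are invented, and the exact functional form in PYFLOW 2.0 may differ.

    ```python
    # Deposition rate summed over grain-size classes (illustrative values only).
    classes = [
        # (concentration kg/m^3, terminal velocity m/s, deposition probability)
        (0.20, 2.0, 1.0),   # lapilli
        (0.10, 0.8, 0.8),   # coarse ash
        (0.05, 0.2, 0.4),   # fine ash
    ]
    rate = sum(c * v * p for c, v, p in classes)   # kg m^-2 s^-1
    print(f"deposition rate: {rate:.3f} kg/m^2/s")
    ```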

  16. Duration of resuscitation efforts for in-hospital cardiac arrest by predicted outcomes: Insights from Get With The Guidelines – Resuscitation

    PubMed Central

    Bradley, Steven M.; Liu, Wenhui; Chan, Paul S.; Girotra, Saket; Goldberger, Zachary D.; Valle, Javier A.; Perman, Sarah M.; Nallamothu, Brahmajee K.

    2017-01-01

    Background The duration of resuscitation efforts has implications for patient survival of in-hospital cardiac arrest (IHCA). It is unknown whether patients with better predicted survival of IHCA receive longer attempts at resuscitation. Methods In a multicenter observational cohort of 40,563 adult non-survivors of resuscitation efforts for IHCA between 2000 and 2012, we determined the pre-arrest predicted probability of survival to discharge with good neurologic status, categorized into very low (<1%), low (1–3%), average (>3–15%), and above average (>15%). We then determined the association between predicted arrest survival probability and the duration of resuscitation efforts. Results The median duration of resuscitation efforts among all non-survivors was 19 min (interquartile range 13–28 min). Overall, the median duration of resuscitation efforts was longer in non-survivors with a higher predicted probability of survival with good neurologic status (medians of 16, 17, 20, and 23 min among the groups predicted to have very low, low, average, and above-average probabilities, respectively; P < 0.001). However, the duration of resuscitation was often discordant with predicted survival, including longer than median duration of resuscitation efforts in 40.4% of patients with very low predicted survival and shorter than median duration of resuscitation efforts in 31.9% of patients with above-average predicted survival. Conclusions The duration of resuscitation efforts in patients with IHCA was generally consistent with their predicted survival. However, nearly a third of patients with above-average predicted outcomes received a shorter than average (less than 19 min) duration of resuscitation efforts. PMID:28039064

  17. Laboratory-Tutorial Activities for Teaching Probability

    ERIC Educational Resources Information Center

    Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…

  18. Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA

    USGS Publications Warehouse

    Yarra, Allyson N.; Magoulick, Daniel D.

    2018-01-01

    Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than in 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.

  19. The impact of land ownership, firefighting, and reserve status on fire probability in California

    NASA Astrophysics Data System (ADS)

    Starrs, Carlin Frances; Butsic, Van; Stephens, Connor; Stewart, William

    2018-03-01

    The extent of wildfires in the western United States is increasing, but how land ownership, firefighting, and reserve status influence fire probability is unclear. California serves as a unique natural experiment for estimating the impact of these factors, as ownership is split equally between federal and non-federal landowners; there is a relatively large proportion of reserved lands where extractive uses are prohibited and fire suppression is limited; and land ownership and firefighting responsibility are purposefully not always aligned. Panel Poisson regression techniques and pre-regression matching were used to model changes in annual fire probability from 1950-2015 on reserve and non-reserve lands under federal and non-federal ownership across four vegetation types: forests, rangelands, shrublands, and forests without commercial species. Fire probability was found to have increased over time across all 32 categories. A marginal effects analysis showed that federal ownership and firefighting were associated with increased fire probability, and that the difference in fire probability on federal versus non-federal lands is increasing over time. Ownership, firefighting, and reserve status played roughly equal roles in determining fire probability, and were found to have much greater influence than average maximum temperature (°C) during the summer months (June, July, August), average annual precipitation (cm), and average annual topsoil moisture content by volume, demonstrating the critical role these factors play in western fire regimes and the importance of including them in future analyses focused on understanding and predicting wildfire in the western United States.

  20. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n ≥ N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has recently been applied to determine the eigenvalue probability density function for random matrices A⁻¹B, where A and B are random matrices with standard complex normal entries. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
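
    For reference, as we recall the Zyczkowski-Sommers result for such truncated unitary matrices (stated here without the normalization constant, so treat it as a reminder rather than an authoritative quotation), the density is supported on the unit disk and reads

      P(z_1, \dots, z_N) \propto \prod_{1 \le j < k \le N} |z_k - z_j|^2
          \prod_{j=1}^{N} \left( 1 - |z_j|^2 \right)^{n-1},
          \qquad |z_j| < 1 .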

  1. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained by solving a system of Volterra equations of the first kind. In addition, we develop an ad hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option, and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
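
    The paper's regularization procedure is specific to its kernel system and is not reproduced here, but the basic building block, a Volterra equation of the first kind solved by sequential quadrature, can be sketched as follows (midpoint rule, scalar case, with a toy kernel and data chosen so the exact solution is known):

      import numpy as np

      def solve_volterra_first_kind(K, g, t_max, n):
          """Naive midpoint-rule solver for int_0^t K(t,s) f(s) ds = g(t).
          Assumes K(t,t) != 0; no regularization, so very fine grids may
          amplify noise in g (first-kind equations are ill-posed)."""
          h = t_max / n
          s = (np.arange(n) + 0.5) * h              # quadrature midpoints
          f = np.zeros(n)
          for i in range(n):
              t = (i + 1) * h
              acc = sum(h * K(t, s[j]) * f[j] for j in range(i))
              f[i] = (g(t) - acc) / (h * K(t, s[i]))
          return s, f

      # Toy check: K = 1 and f(s) = s give g(t) = t^2 / 2.
      s, f = solve_volterra_first_kind(lambda t, s: 1.0,
                                       lambda t: 0.5 * t * t, 1.0, 50)
      print(np.max(np.abs(f - s)))                  # ~ machine precision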

  2. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.

  3. The Effect of Inhibitory Neuron on the Evolution Model of Higher-Order Coupling Neural Oscillator Population

    PubMed Central

    Qi, Yi; Wang, Rubin; Jiao, Xianfa; Du, Ying

    2014-01-01

    We proposed a higher-order coupling neural network model including inhibitory neurons and examined the dynamical evolution of the average number density and phase-neural coding under spontaneous activity and external stimulation. The results indicated that an increase in inhibitory coupling strength causes a decrease in the average number density, whereas an increase in excitatory coupling strength causes an increase in the stable amplitude of the average number density. Whether or not the neural oscillator population is able to enter a new synchronous oscillation is determined by the excitatory and inhibitory coupling strengths. In the presence of external stimulation, the evolution of the average number density depends on both the external stimulation and the coupling term, with the dominant of the two determining the final evolution. PMID:24516505

  4. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
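
    A toy illustration of the fusion step, assuming (much more crudely than the paper's atlas-based conditionals) that both conditionals for one voxel are Gaussian in the density value and are simply multiplied; all names and numbers here are our assumptions:

      import numpy as np

      rho = np.linspace(-200.0, 1600.0, 4000)   # candidate density values [HU]

      def gauss(x, mu, sigma):
          return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

      # Assumed conditionals for a single voxel:
      p_intensity = gauss(rho, 300.0, 150.0)    # p(rho | T1, T2 intensities)
      p_location  = gauss(rho, 450.0, 250.0)    # p(rho | atlas location)

      posterior = p_intensity * p_location      # naive product-of-experts fusion
      posterior /= np.trapz(posterior, rho)     # normalize on the grid

      rho_hat = np.trapz(rho * posterior, rho)  # posterior mean = estimate
      print(f"estimated density for this voxel ~ {rho_hat:.0f} HU")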

  5. [Analysis of main risk factors causing foodborne diseases in food catering business].

    PubMed

    Fan, Yong-xiang; Liu, Xiu-mei; Bao, Yi-dan

    2011-06-01

    To study the main risk factors that cause foodborne diseases in the food catering business, data from references and from investigations conducted in food catering units were used to establish models based on @Risk 4.5 with the Monte Carlo method, following the food handling practice model (FHPM), to assess the risk factors of food contamination in food catering units. The Beta-Poisson dose-response models for Salmonella (developed by WHO/FAO and the United States Department of Agriculture) and Vibrio parahaemolyticus (developed by the US FDA) were used to analyze the dose-response relationships of the pathogens. The average probability of food poisoning from consuming Salmonella-contaminated cooked meat stored under refrigeration was 1.96 × 10⁻⁴, which was 1/2800 of that for food stored without refrigeration (average probability 0.35 at a room temperature of 25°C). The average probability for meat stored 6 hours at room temperature was 0.11, 16 times that for 2 hours of storage (6.79 × 10⁻³). The average probability from consuming contaminated meat without thorough cooking was 1.71 × 10⁻⁴, 100 times that for fully cooked meat (1.88 × 10⁻⁶). The probability of food poisoning from consuming Vibrio parahaemolyticus-contaminated fresh seafood grew in proportion to the contamination level and prevalence. The primary contamination level, storage temperature and time, cooking process, and cross contamination are the important factors for catering food safety.
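
    The Beta-Poisson dose-response form used in such assessments is P(ill) = 1 − (1 + D/β)^(−α). A minimal Monte Carlo sketch in that spirit; the parameter values and dose distribution below are assumptions for illustration, not the WHO/FAO or FDA fits:

      import numpy as np

      rng = np.random.default_rng(0)

      alpha, beta = 0.13, 51.45          # assumed Beta-Poisson parameters
      # Assumed lognormal variability of the ingested dose (CFU per serving).
      dose = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)

      p_ill = 1.0 - (1.0 + dose / beta) ** (-alpha)
      print(f"average probability of illness per serving ~ {p_ill.mean():.3e}")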

  6. The contribution of testosterone to skeletal development and maintenance: lessons from the androgen insensitivity syndrome.

    PubMed

    Marcus, R; Leary, D; Schneider, D L; Shane, E; Favus, M; Quigley, C A

    2000-03-01

    Although androgen status affects bone mass in women and men, an androgen requirement for skeletal normalcy has not been established. Women with androgen insensitivity syndrome (AIS) have 46,XY genotypes with androgen receptor abnormalities rendering them partially or completely refractory to androgen. Twenty-eight women with AIS (22 complete and 6 high-grade partial), aged 11-65 yr, responded to questionnaires about health history, gonadal surgery, and exogenous estrogen use and underwent bone mineral density (BMD) assessment by dual-energy x-ray absorptiometry. BMD values at the lumbar spine and proximal femur were compared to age-specific female normative values and expressed as z-scores. Average height for adults in this cohort, 174 cm (68.5 in.), was moderately increased compared with the average height of adult American women of 162.3 cm, with skewing toward higher values: 5 women exceeded 6 ft in height, and 30% of the 18 adult women with complete AIS exceeded 5 ft, 11 in. in height. The average lumbar spine and hip BMD z-scores of the 6 women with partial AIS did not differ from population norms. In contrast, the average lumbar spine BMD z-score of women with complete AIS was significantly reduced at -1.08 (P = 0.0003), whereas the average value for hip BMD did not differ from normal. When BMD was compared between women who reported good estrogen replacement therapy compliance and those who reported poor compliance, there was a significantly greater deficit at the spine for women with poor compliance (z = -2.15 +/- 0.15 vs. -0.75 +/- 0.28; P < 0.0001). Furthermore, hip BMD was also significantly reduced in the noncompliant group (z = -0.95 +/- 0.40). Comparison of BMD values to normative male standards gave z-score reductions (z = -1.81 +/- 0.36) greater than those observed with female standards. Because of the high prevalence of tall stature in this study sample, we calculated bone mineral apparent density, a variable that adjusts for differences in bone size. Even for the estrogen-compliant group, bone mineral apparent density z-scores were subnormal at both the spine (z = -1.3 +/- 0.43; P < 0.01) and the hip (z = -1.38 +/- 0.28; P = 0.017). Six women with complete AIS had sustained cortical bone fractures, of whom 3 reported multiple (>3) fractures. We conclude that even when compliance with exogenous estrogen use is excellent, women with complete AIS show moderate deficits in spine BMD, averaging close to 1 SD below normative means, and that with correction of BMD for bone size, skeletal deficits are magnified and include the proximal femur. The results suggest that severe osteopenia in some women with AIS probably reflects a component of inadequate estrogen replacement rather than androgen lack alone.

  7. Two is better than one: joint statistics of density and velocity in concentric spheres as a cosmological probe

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.

    2017-08-01

    The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields, assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding agreement at the 2 per cent level for a wide range of velocity divergences and densities in the mildly non-linear regime (˜10 Mpc h⁻¹ at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and the conditional velocity PDF subject to a given over/underdensity, which are of interest for understanding the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which has smaller relative errors than the sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f), or it can be combined with the galaxy density PDF to extract bias parameters.

  8. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

    We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ₀)^α]^(−β) with τ₀ > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for the response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing the reparameterization β = (2−q)/(q−1) and τ₀ = (q−1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions f_(α,β)(t/τ₀) go over to the one-sided Lévy stable distributions as q tends to one. Moreover, applying the self-similarity property of the probability densities g_(α,β)(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.

  9. Tree-average distances on certain phylogenetic networks have their weights uniquely determined.

    PubMed

    Willson, Stephen J

    2012-01-01

    A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
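
    A minimal sketch of the expectation itself, for a hypothetical one-hybrid network displaying two trees; the tree probabilities and leaf-pair path lengths are assumed values, not data from the paper:

      # The hybrid inherits from parent A with probability 0.7, else parent B,
      # so the network displays two trees with these weights.
      displayed = [
          (0.7, {("x", "y"): 4.0, ("x", "z"): 6.0, ("y", "z"): 5.0}),
          (0.3, {("x", "y"): 7.0, ("x", "z"): 3.0, ("y", "z"): 6.0}),
      ]

      def tree_average_distance(u, v):
          """Expected leaf-to-leaf path length over the displayed trees."""
          key = tuple(sorted((u, v)))
          return sum(p * dist[key] for p, dist in displayed)

      print(tree_average_distance("x", "y"))   # 0.7*4 + 0.3*7 = 4.9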

  10. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from NOAA satellite data is urgent not only for solving meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface, associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used AVHRR/NOAA video data that regularly display the situation in the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For the class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and the information content of the synthesized decision rules. In this case, to solve the problem of identifying cloud field types efficiently, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem arises of nonparametric reconstruction of the distributions from the learning samples. To this end, we used nonparametric estimates of the distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was replaced by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, the cloudiness type was identified using the Bayes decision rule.
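
    A schematic of the final stage, per-class kernel density estimates plugged into a Bayes decision rule, using the plain (unmodified) Epanechnikov kernel and invented one-dimensional channel data; the modified kernel and the risk-functional fit of the paper are not reproduced:

      import numpy as np

      def epanechnikov_kde(train, x, h):
          """KDE with the Epanechnikov kernel K(u) = 0.75*(1 - u^2), |u| <= 1."""
          u = (x[:, None] - train[None, :]) / h
          k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
          return k.mean(axis=1) / h

      rng = np.random.default_rng(1)
      cloud   = rng.normal(0.2, 0.05, 300)   # assumed channel reflectances
      surface = rng.normal(0.6, 0.10, 300)

      x = np.array([0.15, 0.45, 0.70])       # pixels to classify
      prior_cloud, prior_surface = 0.4, 0.6  # assumed class priors
      score_cloud   = prior_cloud   * epanechnikov_kde(cloud, x, 0.05)
      score_surface = prior_surface * epanechnikov_kde(surface, x, 0.05)
      print(np.where(score_cloud > score_surface, "cloud", "surface"))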

  11. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study

    PubMed Central

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-01-01

    Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate the performance of this new method by comparing it with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined to be a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility for improving the target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of the average intensity projection (AIP) of the 4D images. Results: Probability-based sorting showed improved similarity of the breathing motion PDF from 4D images to the reference PDF compared to single-cycle sorting, indicated by a significant increase in the Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, versus single-cycle sorting, DSC = 0.83 ± 0.05; p < 0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation of motion artifacts and quantitative evaluation of tumor volume precision and accuracy and the accuracy of the AIP of the 4D images. Conclusions: In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of the tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management. PMID:27908178
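
    Step (2) of the method, grouping individual breathing cycles by amplitude and period and keeping groups holding more than 10% of all cycles as main breathing patterns, might be sketched as follows; the binning resolution and the toy cycle data are our assumptions:

      import numpy as np

      # Hypothetical (amplitude [mm], period [s]) pairs, one per breathing cycle.
      cycles = np.array([[10.2, 4.1], [9.8, 3.9], [15.5, 5.2], [10.1, 4.0],
                         [15.9, 5.0], [10.4, 4.2], [22.0, 6.1], [9.9, 4.1]])

      # Assumed grouping: 2 mm amplitude bins and 0.5 s period bins.
      keys = np.column_stack((np.round(cycles[:, 0] / 2.0),
                              np.round(cycles[:, 1] / 0.5)))
      uniq, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                                    return_counts=True)
      inv = inv.ravel()                    # guard against numpy shape quirks

      for g in range(len(uniq)):
          if counts[g] / len(cycles) > 0.10:          # main breathing pattern
              rep = cycles[inv == g].mean(axis=0)     # group-average cycle
              print(f"main cycle: amplitude {rep[0]:.1f} mm, period {rep[1]:.1f} s")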

  12. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width, by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
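
    In outline, the probabilities act as weights over a galaxy sample. A toy version with a Gaussian probability function in cumulative log number density, where the drift (+0.2 dex) and spread (0.35 dex) and the mock sample are assumed values rather than the paper's fitted functions:

      import numpy as np

      rng = np.random.default_rng(2)

      # Mock z_f sample: cumulative log10 number density and log10 stellar mass.
      log_n    = rng.normal(-3.5, 0.5, 5000)
      log_mass = 11.0 - 1.2 * (log_n + 3.5) + rng.normal(0.0, 0.2, 5000)

      # Assumed progenitor PDF for a z_0 population selected at log n = -4.0,
      # with an assumed median drift of +0.2 dex and spread of 0.35 dex.
      w = np.exp(-0.5 * ((log_n - (-4.0 + 0.2)) / 0.35) ** 2)

      mean_mass = np.average(log_mass, weights=w)
      print(f"probability-weighted progenitor log10 stellar mass ~ {mean_mass:.2f}")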

  13. Radiative transition of hydrogen-like ions in quantum plasma

    NASA Astrophysics Data System (ADS)

    Hu, Hongwei; Chen, Zhanbin; Chen, Wencong

    2016-12-01

    At fusion plasma electron temperatures and number densities in the regimes of 1 × 10³-1 × 10⁷ K and 1 × 10²⁸-1 × 10³¹ m⁻³, respectively, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing the fusion plasma than the Debye screening model. The relativistic correction to the bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude within the same number density regime.

  14. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.

  15. Randomness and diversity matter in the maintenance of the public resources

    NASA Astrophysics Data System (ADS)

    Liu, Aizhi; Zhang, Yanling; Chen, Xiaojie; Sun, Changyin

    2017-03-01

    Most previous models of the public goods game assume two possible strategies, i.e., investing everything or nothing. The real-life situation is rarely all or nothing. In this paper, we consider multiple strategies adopted in a well-mixed population, each strategy representing an investment toward producing the public goods. Past efforts have found that randomness matters in the evolution of fairness in the ultimatum game. In a framework involving no other mechanisms, we study how diversity and randomness influence the average investment of the population, defined as the mean value of all individuals' strategies. The level of diversity is increased by increasing the strategy number, and the level of randomness is increased by increasing the mutation probability, or by decreasing the population size or the selection intensity. We find that a higher level of diversity and a higher level of randomness lead to larger average investment and more strongly favor the evolution of cooperation. Under weak selection, the average investment changes very little with the strategy number, the population size, and the mutation probability. Under strong selection, the average investment changes very little with the strategy number and the population size, but changes considerably with the mutation probability. Under intermediate selection, the average investment increases significantly with the strategy number and the mutation probability, and decreases significantly with the population size. These findings are meaningful for studying how to maintain public resources.

  16. Spatial distributions of the red palm mite, Raoiella indica (Acari: Tenuipalpidae) on coconut and their implications for development of efficient sampling plans.

    PubMed

    Roda, A; Nachman, G; Hosein, F; Rodrigues, J C V; Peña, J E

    2012-08-01

    The red palm mite (Raoiella indica), an invasive pest of coconut, entered the Western hemisphere in 2004, then rapidly spread through the Caribbean and into Florida, USA. Developing effective sampling methods may aid in the timely detection of the pest in a new area. Studies were conducted to characterize and compare the intra-tree spatial distribution of red palm mite populations on coconut in two different geographical areas, Trinidad and Puerto Rico, recently invaded by the mite. The middle stratum of a palm hosted significantly more mites than fronds from the upper or lower canopy, and fronds from the lower stratum, on average, had significantly fewer mites than the two other strata. The mite populations did not vary within a frond. The top section of the pinna had significantly lower mite densities than the two other sections, which were not significantly different from each other. In order to improve future sampling plans for the red palm mite, the data were used to estimate the variance components associated with the various levels of the hierarchical sampling design. Additionally, presence-absence data were used to investigate the probability of no mites being present in a pinna section randomly chosen from a frond inhabited by mites at a certain density. Our results show that the most precise density estimate at the plantation level is obtained by sampling one pinna section per tree from as many trees as possible.

  17. Epidemics in interconnected small-world networks.

    PubMed

    Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong

    2015-01-01

    Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
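
    A compact simulation of the setup: discrete-time SIS dynamics on two Watts-Strogatz small-world networks joined by a few random interconnecting links. All parameter values are illustrative, and the synchronous update rule is a simple sketch rather than the paper's formulation:

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(3)
      n, k, p_rewire = 500, 6, 0.1           # per-network size, degree, rewiring

      G = nx.disjoint_union(nx.watts_strogatz_graph(n, k, p_rewire, seed=1),
                            nx.watts_strogatz_graph(n, k, p_rewire, seed=2))
      for _ in range(20):                    # sparse random interconnections
          G.add_edge(int(rng.integers(0, n)), int(rng.integers(n, 2 * n)))

      beta, mu = 0.05, 0.2                   # infection and recovery rates
      infected = {int(i) for i in rng.choice(2 * n, size=10, replace=False)}
      for _ in range(200):                   # synchronous SIS sweeps
          new = {i for i in infected if rng.random() >= mu}        # recoveries
          for i in infected:                                       # transmissions
              new.update(j for j in G.neighbors(i) if rng.random() < beta)
          infected = new
      print(f"steady-state infection density ~ {len(infected) / (2 * n):.2f}")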

  18. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  19. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high density of septic systems, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability that a well pumps water that is partly septic system leachate. A detailed groundwater flow and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield, depending on aquifer properties, lot size, and drainfield size. We show that a high septic system density implies a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and for those that are harmful even at low concentrations.
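
    The intersection probability can be illustrated with a simple Poisson argument: if drainfields are scattered at density ρ per unit area and the well's capture zone covers an area A, the chance that at least one drainfield falls inside the capture zone is 1 − exp(−ρA). A sketch with assumed numbers, not the paper's flow-resolving model:

      import math

      lot_area = 4000.0                       # m^2, assumed semi-rural lot size
      rho = 1.0 / lot_area                    # one drainfield per lot, on average
      capture_len, capture_wid = 150.0, 20.0  # m, assumed capture-zone footprint

      A = capture_len * capture_wid
      p_hit = 1.0 - math.exp(-rho * A)
      print(f"probability the well captures septic leachate ~ {p_hit:.2f}")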

  20. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
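
    The gain comes from dwelling on high-probability code-phase cells first. A toy comparison with an assumed Gaussian prior over the phase-uncertainty cells; the dwell time per cell is taken as the unit, and none of the numbers below are from the report:

      import numpy as np

      n_cells = 1001                          # code-phase uncertainty cells
      idx = np.arange(n_cells)
      prior = np.exp(-0.5 * ((idx - n_cells // 2) / (n_cells / 10.0)) ** 2)
      prior /= prior.sum()                    # assumed Gaussian a priori pdf

      # Expected number of dwells until the true cell is examined:
      uniform_sweep = np.sum(prior * (idx + 1))      # left-to-right sweep
      order = np.argsort(prior)[::-1]                # most likely cells first
      rank = np.empty(n_cells)
      rank[order] = idx + 1
      prior_ordered = np.sum(prior * rank)
      print(f"expected dwells: uniform {uniform_sweep:.0f}, "
            f"prior-ordered {prior_ordered:.0f}")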
