Sample records for conditional probability density

  1. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose: Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method: Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results: The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions: As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  2. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  3. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer-implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  4. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry the latent assumption that the probability density is a Gaussian distribution, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM at each node, this model can be applied not only to regression but to classification as well. We applied this model to denoising ECG data. The proposed method can potentially be applied to other time series, such as stock market return prediction.

  5. A theory of stationarity and asymptotic approach in dissipative systems

    NASA Astrophysics Data System (ADS)

    Rubel, Michael Thomas

    2007-05-01

    The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?). To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory. This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly? To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.

  6. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right-hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
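
    The definition just described can be written compactly. The following is a schematic rendering in assumed notation (the paper's own symbols may differ), in the style of standard density-weighted PDF definitions:

    ```latex
    % Fine-grained PDF of the flow variables \phi at (x, t), and its
    % mass-density-weighted ensemble average (assumed notation):
    \mathcal{F}(\psi;\mathbf{x},t) = \delta\bigl(\psi - \phi(\mathbf{x},t)\bigr),
    \qquad
    \widetilde{P}(\psi;\mathbf{x},t)
      = \frac{\bigl\langle \rho\,\mathcal{F}(\psi;\mathbf{x},t) \bigr\rangle}
             {\langle \rho \rangle},
    ```

    where the Dirac delta selects realizations in which the flow variables take the value ψ and ⟨·⟩ denotes the ensemble average; moments of this APDF then reproduce the mass-weighted (Favre-type) mean variables mentioned above.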

  7. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  8. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function LL(x) = ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  9. A note on the problem of choosing a model of the universe. II

    NASA Astrophysics Data System (ADS)

    Skalsky, Vladimir

    1989-05-01

    The value of the mean mass density (rho) of the universe is examined. It is shown that there is a difference between the present terrestrial conditions and the initial conditions of the universe's expansion and that, within the sphere of physical boundary conditions represented by Planck's values (when the present evolutionary phase of the universe was probably decided), there are serious limitations on the value of rho. On the basis of these limiting conditions, it is postulated that some cause may exist through which the condition corresponding to the critical mass density of the universe was realized.

  10. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions

    DTIC Science & Technology

    2009-03-01

    Report AFIT/GE/ENG/09-23. Notation includes D, the random variable governing the distribution of dither values, and p_D(t), the probability density function of dither values. The thesis assesses the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.

  11. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions

    DTIC Science & Technology

    2009-03-01

    Report AFIT/GE/ENG/09-23. Notation includes D, the random variable governing the distribution of dither values, and p_D(t), the probability density function of dither values. The thesis assesses the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.

  12. Timescales of isotropic and anisotropic cluster collapse

    NASA Astrophysics Data System (ADS)

    Bartelmann, M.; Ehlers, J.; Schneider, P.

    1993-12-01

    From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.

  13. Quantum mechanical probability current as electromagnetic 4-current from topological EM fields

    NASA Astrophysics Data System (ADS)

    van der Mark, Martin B.

    2015-09-01

    Starting from a complex 4-potential A = αdβ we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl(1,3) as mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double field solution that was overlooked previously. A more general null-vector condition is found and wave-functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.

  14. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  15. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  16. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low- and high-density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  17. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, choosing the constraints, and making the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.

  18. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  19. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
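
    As a reminder of the functional form involved (a standard result, stated here for orientation rather than quoted from the paper), maximizing the entropy subject to the first few moment constraints yields an exponential-family density whose Lagrange multipliers are fixed by the moments:

    ```latex
    \max_{p}\; -\!\int p(x)\ln p(x)\,dx
    \quad \text{subject to} \quad
    \int x^{j}\,p(x)\,dx = \mu_{j},\;\; j = 0,\dots,m
    \;\;\Longrightarrow\;\;
    p(x) = \exp\!\Bigl(-\sum_{j=0}^{m}\lambda_{j}\,x^{j}\Bigr),
    ```

    where λ0 absorbs the normalization; the Bayesian treatment described above then places posterior probabilities on the multipliers λj rather than fixing them by point constraints.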

  20. Probabilistic cluster labeling of imagery data

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1980-01-01

    The problem of obtaining the probabilities of class labels for the clusters using spectral and spatial information from a given set of labeled patterns and their neighbors is considered. A relationship is developed between class and cluster conditional densities in terms of probabilities of class labels for the clusters. Expressions are presented for updating the a posteriori probabilities of the classes of a pixel using information from its local neighborhood. Fixed-point iteration schemes are developed for obtaining the optimal probabilities of class labels for the clusters. These schemes utilize spatial information and also the probabilities of label imperfections. Experimental results from the processing of remotely sensed multispectral scanner imagery data are presented.

  1. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well-characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
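
    A quick way to see the non-Gaussian statistics described here is to sample the K distribution through its standard compound (gamma-modulated exponential) representation. The sketch below is illustrative only; the parameter values are invented and are not taken from the Letter.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def k_distributed_intensity(alpha, mean=1.0, size=100_000):
        """Sample K-distributed intensities: I | s ~ Exponential(mean=s)
        with s ~ Gamma(alpha, scale=mean/alpha). Small alpha mimics a low
        effective number of scatterers; alpha -> infinity recovers the
        exponential-intensity (fully developed speckle) limit."""
        s = rng.gamma(shape=alpha, scale=mean / alpha, size=size)
        return rng.exponential(scale=s)

    sparse = k_distributed_intensity(alpha=1.5)    # low scatterer density
    dense = k_distributed_intensity(alpha=50.0)    # high scatterer density

    # Normalized second moment <I^2>/<I>^2: equals 2 for exponential
    # intensity (Gaussian field), and 2*(1 + 1/alpha) for the K model.
    for name, x in [("sparse", sparse), ("dense", dense)]:
        print(name, (x ** 2).mean() / x.mean() ** 2)
    ```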

  2. Single-molecule stochastic times in a reversible bimolecular reaction

    NASA Astrophysics Data System (ADS)

    Keller, Peter; Valleriani, Angelo

    2012-08-01

    In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudo-bimolecular reactions in biology, such as, for instance, molecular motors. We derive the exact probability density for the stochastic waiting time that a molecule of species A needs until the reaction with a molecule of species B takes place. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic and in the approximate treatment.
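
    The flavor of this calculation can be reproduced with a small Gillespie simulation that tracks one tagged A molecule in the reversible reaction A + B ⇌ C. Everything below (rate constants, copy numbers) is an invented illustration, not the authors' parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    K_ON, K_OFF = 0.01, 1.0   # hypothetical rate constants

    def tagged_waiting_time(n_a=5, n_b=5, n_c=0):
        """Gillespie simulation of A + B <-> C; returns the time until a
        tagged A molecule first binds a B, with the B copy number
        fluctuating because the other A molecules react too."""
        t, a, b, c = 0.0, n_a, n_b, n_c
        while True:
            r_fwd, r_bwd = K_ON * a * b, K_OFF * c
            t += rng.exponential(1.0 / (r_fwd + r_bwd))
            if rng.uniform() < r_fwd / (r_fwd + r_bwd):   # a binding fires
                if rng.uniform() < 1.0 / a:               # tagged A bound
                    return t
                a, b, c = a - 1, b - 1, c + 1
            else:                                          # a C dissociates
                a, b, c = a + 1, b + 1, c - 1

    samples = np.array([tagged_waiting_time() for _ in range(20_000)])
    # Mass action would predict an exponential density with mean
    # 1/(k_on * n_B); at low copy numbers the simulated density differs.
    print(samples.mean(), 1.0 / (K_ON * 5))
    ```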

  3. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
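
    For reference, the two central quantities here are standard (only the application-specific conditional density is the paper's own): the Fisher information of the measurement density defines the Rao metric, and distances under that metric set the spacing of the samples:

    ```latex
    g_{ij}(\theta)
      = \mathbb{E}\!\left[
          \frac{\partial \ln p(z \mid \theta)}{\partial \theta_{i}}\,
          \frac{\partial \ln p(z \mid \theta)}{\partial \theta_{j}}
        \right],
    \qquad
    ds^{2} = \sum_{i,j} g_{ij}(\theta)\, d\theta_{i}\, d\theta_{j},
    ```

    so the number of samples follows from covering the parameter space at a fixed Rao distance between neighboring sample points.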

  4. Stochastic analysis of particle movement over a dune bed

    USGS Publications Warehouse

    Lee, Baum K.; Jobson, Harvey E.

    1977-01-01

    Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration; therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
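
    Given the fitted distributions reported here, the stochastic transport model is easy to simulate: alternate gamma-distributed rest periods and step lengths and accumulate displacement. The parameter values below are placeholders for illustration, not the flume-derived fits.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    STEP_K, STEP_THETA = 2.0, 0.15   # step length gamma params [m]
    REST_K, REST_THETA = 1.5, 40.0   # rest period gamma params [s]

    def displacement(t_total, rng=rng):
        """Downstream distance moved by one particle in t_total seconds:
        gamma-distributed rests separate gamma-distributed steps (steps
        treated as instantaneous relative to the rests)."""
        t, x = 0.0, 0.0
        while True:
            t += rng.gamma(REST_K, REST_THETA)   # rest on the bed
            if t >= t_total:
                return x
            x += rng.gamma(STEP_K, STEP_THETA)   # take one step

    cloud = np.array([displacement(3600.0) for _ in range(5_000)])
    print(f"mean = {cloud.mean():.2f} m, std = {cloud.std():.2f} m")
    ```

    In the full model the gamma parameters are conditioned on the elevations of erosion and deposition; drawing those elevations from the fitted 'truncated Gaussian' or 'triangular' density would add that layer.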

  5. The influence of surface properties on the plasma dynamics in radio-frequency driven oxygen plasmas: Measurements and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greb, Arthur; Niemi, Kari; O'Connell, Deborah

    2013-12-09

    Plasma parameters and dynamics in capacitively coupled oxygen plasmas are investigated for different surface conditions. Metastable species concentration, electronegativity, and the spatial distribution of particle densities, as well as the ionization dynamics, are significantly influenced by the surface loss probability of metastable singlet delta oxygen (SDO). Simulated surface conditions are compared to experiments in the plasma-surface interface region using phase-resolved optical emission spectroscopy. It is demonstrated how in-situ measurements of excitation features can be used to determine SDO surface loss probabilities for different surface materials.

  6. Constraints on rapidity-dependent initial conditions from charged-particle pseudorapidity densities and two-particle correlations

    NASA Astrophysics Data System (ADS)

    Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.

    2017-10-01

    We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality- and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p+Pb and Pb+Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.

  7. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  8. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.
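
    The gist of the spatial probability analysis can be approximated with a toy geometric model: treat drainfields as a Poisson field of rectangles and the well's capture zone as a fixed rectangle. All dimensions below are hypothetical illustration values, not the study's calibrated model.

    ```python
    import numpy as np

    def intersect_prob(density_per_km2, cz_len=200.0, cz_wid=20.0,
                       df_len=20.0, df_wid=10.0):
        """P(capture zone overlaps >= 1 drainfield). Two axis-aligned
        rectangles overlap iff the drainfield centre falls inside the
        capture zone dilated by half the drainfield size (Minkowski sum),
        so the overlap count is Poisson with mean density * dilated area."""
        dilated_km2 = (cz_len + df_len) * (cz_wid + df_wid) / 1e6
        return 1.0 - np.exp(-density_per_km2 * dilated_km2)

    for dens in (10, 50, 200, 800):   # septic systems per km^2
        print(f"{dens:4d}/km^2 -> P(overlap) = {intersect_prob(dens):.3f}")
    ```

    The rapid climb of this curve with density is the qualitative point of the study; in the real analysis the capture-zone geometry, and hence the overlap probability, depends strongly on hydraulic conductivity.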

  9. TRPM7 Is Required for Normal Synapse Density, Learning, and Memory at Different Developmental Stages.

    PubMed

    Liu, Yuqiang; Chen, Cui; Liu, Yunlong; Li, Wei; Wang, Zhihong; Sun, Qifeng; Zhou, Hang; Chen, Xiangjun; Yu, Yongchun; Wang, Yun; Abumaria, Nashat

    2018-06-19

    The TRPM7 chanzyme contributes to several biological and pathological processes in different tissues. However, its role in the CNS under physiological conditions remains unclear. Here, we show that TRPM7 knockdown in hippocampal neurons reduces structural synapse density. The synapse density is rescued by the α-kinase domain in the C terminus but not by the ion channel region of TRPM7 or by increasing extracellular concentrations of Mg2+ or Zn2+. Early postnatal conditional knockout of TRPM7 in mice impairs learning and memory and reduces synapse density and plasticity. TRPM7 knockdown in the hippocampus of adult rats also impairs learning and memory and reduces synapse density and synaptic plasticity. In knockout mice, restoring expression of the α-kinase domain in the brain rescues synapse density/plasticity and memory, probably by interacting with and phosphorylating cofilin. These results suggest that brain TRPM7 is important for normal synaptic and cognitive functions under physiological, non-pathological conditions.

  10. Population density shapes patterns of survival and reproduction in Eleutheria dichotoma (Hydrozoa: Anthoathecata).

    PubMed

    Dańko, Aleksandra; Schaible, Ralf; Pijanowska, Joanna; Dańko, Maciej J

    2018-01-01

    Budding hydromedusae have high reproductive rates due to asexual reproduction and can occur in high population densities along the coasts, specifically in tidal pools. In laboratory experiments, we investigated the effects of population density on the survival and reproductive strategies of a single clone of Eleutheria dichotoma. We found that sexual reproduction occurs at the highest rate at medium population densities. Increased sexual reproduction was associated with lower budding (asexual reproduction) and survival probability. Sexual reproduction results in the production of motile larvae that can, in contrast to medusae, seek to escape unfavorable conditions by actively looking for better environments. The successful settlement of a larva results in starting the polyp stage, which is probably more resistant to environmental conditions. This is the first study that has examined the life-history strategies of the budding hydromedusa E. dichotoma by conducting a long-term experiment with a relatively large sample size that allowed for the examination of age-specific mortality and reproductive rates. We found that most sexual and asexual reproduction occurred at the beginning of life following a very rapid process of maturation. The parametric models fitted to the mortality data showed that population density was associated with an increase in the rate of aging, an increase in the level of the late-life mortality plateau, and a decrease in the hidden heterogeneity in individual mortality rates. The effects of population density on life-history traits are discussed in the context of resource allocation and the r/K-strategies' continuum concept.

  11. Hepatitis disease detection using Bayesian theory

    NASA Astrophysics Data System (ADS)

    Maseleno, Andino; Hidayati, Rohmah Zahroh

    2017-02-01

    This paper presents hepatitis disease diagnosis using Bayesian theory, for a better understanding of the theory. In this research, we used Bayesian theory for detecting hepatitis disease and displaying the result of the diagnosis process. Bayes' theory was rediscovered and perfected by Laplace; the basic idea is to use known prior probabilities and conditional probability densities, applying Bayes' theorem to calculate the corresponding posterior probability, and then to use the posterior probability for inference and decision making. Bayesian methods combine existing knowledge, prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever, and headache; the system computes the probability of hepatitis given the presence of malaise, fever, and headache. The result revealed that Bayesian theory successfully identified the existence of hepatitis disease.
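
    A minimal sketch of the computation described above, in Python; the prior and likelihood values are invented for illustration and are not taken from the paper.

    ```python
    P_HEP = 0.05   # hypothetical prior P(hepatitis)

    # Hypothetical (P(symptom | hepatitis), P(symptom | no hepatitis)) pairs
    LIKELIHOOD = {
        "malaise":  (0.80, 0.20),
        "fever":    (0.70, 0.15),
        "headache": (0.60, 0.30),
    }

    def posterior(symptoms, prior=P_HEP):
        """Naive-Bayes posterior P(hepatitis | observed symptoms),
        assuming symptoms are conditionally independent given the class."""
        p_h, p_not = prior, 1.0 - prior
        for s in symptoms:
            given_h, given_not = LIKELIHOOD[s]
            p_h *= given_h
            p_not *= given_not
        return p_h / (p_h + p_not)   # Bayes' theorem, normalized

    print(posterior(["malaise", "fever", "headache"]))   # ~0.66
    ```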

  12. Probability theory for 3-layer remote sensing in ideal gas law environment.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2013-08-26

    We extend the probability model for 3-layer radiative transfer [Opt. Express 20, 10004 (2012)] to ideal gas conditions where a correlation exists between transmission and temperature of each of the 3 layers. The effect on the probability density function for the at-sensor radiances is surprisingly small, and thus the added complexity of addressing the correlation can be avoided. The small overall effect is due to (a) small perturbations by the correlation on variance population parameters and (b) cancellation of perturbation terms that appear with opposite signs in the model moment expressions.

  13. Probability Density Functions of the Solar Wind Driver of the Magnetosphere-Ionosphere System

    NASA Astrophysics Data System (ADS)

    Horton, W.; Mays, M. L.

    2007-12-01

    The solar-wind driven magnetosphere-ionosphere system is a complex dynamical system in that it exhibits (1) sensitivity to initial conditions; (2) multiple space-time scales; (3) bifurcation sequences with hysteresis in transitions between attractors; and (4) noncompositionality. This system is modeled by WINDMI--a network of eight coupled ordinary differential equations which describe the transfer of power from the solar wind through the geomagnetic tail, the ionosphere, and the ring current in the system. The model captures both storm activity from the plasma ring current energy, which yields a model Dst index result, and substorm activity from the region 1 field-aligned current, yielding model AL and AU results. The input to the model is the solar wind driving voltage calculated from ACE solar wind parameter data, which has a regular coherent component and a broad-band turbulent component. Cross-correlation functions of the input-output data time series are computed, and the conditional probability density function for the occurrence of substorms given earlier IMF conditions is derived. The model shows a high probability of substorms for solar activity that contains a coherent, rotating IMF with magnetic cloud features. For a theoretical model of the imprint of solar convection on the solar wind we have used the Lorenz attractor (Horton et al., PoP, 1999, doi:10.1063/1.873683) as a solar wind driver. The work is supported by NSF grant ATM-0638480.

  14. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Bremer, J. E.; Harter, T.

    2012-08-01

    Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).

  15. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as the test case). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
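
    Schematically, and in assumed notation, the fusion step described above combines the intensity-based and location-based conditionals into one posterior whose mean is the electron density estimate; treating the two sources as conditionally independent is a simplifying assumption made here for illustration:

    ```latex
    p\bigl(\rho \mid I_{T1}, I_{T2}, \mathbf{x}\bigr)
      \;\propto\;
      p\bigl(\rho \mid I_{T1}, I_{T2}\bigr)\;
      p\bigl(\rho \mid \mathbf{x}\bigr),
    \qquad
    \hat{\rho}(\mathbf{x})
      = \int \rho\; p\bigl(\rho \mid I_{T1}, I_{T2}, \mathbf{x}\bigr)\,d\rho .
    ```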

  16. Estimation of the probability of success in petroleum exploration

    USGS Publications Warehouse

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum Publishing Corp.

  17. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density; here, word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  18. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  19. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500-1,000 items/m²). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
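
    A stripped-down version of the simulation described above can be run without GIS software, as in the sketch below; the clumping model (a simple Thomas-type cluster process) and all parameter values are stand-ins, not the study's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def detection_prob(density_m2, core_cm2, clumped=False,
                       arena=5.0, trials=1_000):
        """Fraction of core samples capturing >= 1 item. Items are placed
        in an arena x arena (m) plot either uniformly at random or as
        Gaussian clusters around random parent points; edge effects are
        ignored for simplicity."""
        r = np.sqrt(core_cm2 / 1e4 / np.pi)        # core radius [m]
        n = int(density_m2 * arena * arena)
        hits = 0
        for _ in range(trials):
            if clumped:
                n_par = max(n // 50, 1)            # ~50 items per clump
                parents = rng.uniform(0, arena, (n_par, 2))
                pts = (parents[rng.integers(0, n_par, n)]
                       + rng.normal(0, 0.25, (n, 2)))
            else:
                pts = rng.uniform(0, arena, (n, 2))
            centre = rng.uniform(1, arena - 1, 2)  # random core location
            if (np.sum((pts - centre) ** 2, axis=1) < r * r).any():
                hits += 1
        return hits / trials

    for pattern in (False, True):
        print("clumped" if pattern else "random",
              detection_prob(density_m2=500, core_cm2=50, clumped=pattern))
    ```

    With these numbers the random pattern detects items in roughly 1 - exp(-density × core area) ≈ 92% of samples, while clumping pushes detection probability down, matching the qualitative finding above.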

  20. Broadcasting but not receiving: density dependence considerations for SETI signals

    NASA Astrophysics Data System (ADS)

    Smith, Reginald D.

    2009-04-01

    This paper develops a detailed quantitative model which uses the Drake equation and an assumption of an average maximum radio broadcasting distance for a communicative civilization. On this basis, it estimates the minimum civilization density for contact between two civilizations to be probable in a given volume of space under certain conditions, the amount of time it would take for a first contact, and whether reciprocal contact is possible.
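
    The core of such a model reduces to a Poisson argument: if broadcasting civilizations have spatial density n and signals are detectable out to a range R_max, the chance that at least one broadcaster lies within range follows from the volume of the detection sphere. The sketch below uses placeholder numbers, not the paper's estimates.

    ```python
    import numpy as np

    def contact_prob(n_per_ly3, r_max_ly):
        """P(>= 1 broadcaster within range R_max) for a Poisson field of
        civilizations with density n (per cubic light year)."""
        volume = 4.0 / 3.0 * np.pi * r_max_ly ** 3
        return 1.0 - np.exp(-n_per_ly3 * volume)

    def min_density(p_target, r_max_ly):
        """Minimum civilization density making contact probable at the
        level p_target, inverting the Poisson expression above."""
        volume = 4.0 / 3.0 * np.pi * r_max_ly ** 3
        return -np.log(1.0 - p_target) / volume

    print(min_density(0.5, r_max_ly=1000.0))   # ~1.7e-13 per ly^3
    ```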

  1. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
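
    The "gradient" (constant peak width) case of a stochastic approach like this is straightforward to reproduce: scatter m retention times uniformly over the separation window and demand that every adjacent pair be at least one peak width apart. This sketch is a generic reconstruction of that idea, not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def p_success(m, peak_capacity, trials=20_000):
        """Stochastic estimate of P(all m components resolved): retention
        times uniform on [0, 1]; two peaks count as resolved when they are
        separated by at least 1/peak_capacity."""
        gap = 1.0 / peak_capacity
        t = np.sort(rng.uniform(0.0, 1.0, (trials, m)), axis=1)
        return (np.diff(t, axis=1) >= gap).all(axis=1).mean()

    # Closed form for uniform spacings: (1 - (m-1)*gap)**m, which the
    # simulation should reproduce to within sampling error.
    for m, nc in [(5, 50), (10, 100), (20, 100)]:
        print(m, nc, p_success(m, nc), (1 - (m - 1) / nc) ** m)
    ```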

  2. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.

  3. New Concepts in the Evaluation of Biodegradation/Persistence of Chemical Substances Using a Microbial Inoculum

    PubMed Central

    Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han

    2011-01-01

    The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a “black box.” If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at “time zero.” If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore, the history of the inoculum, e.g., a possible pre-exposure to the test substance or similar substances, will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the performance of the biodegradation will follow a typical S-shaped growth curve which depends on the specific growth rate under laboratory conditions, the so-called F/M ratio (ratio between food and biomass), and the possible formation of more or less toxic or recalcitrant metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as half-time. Unfortunately probability and biodegradation half-time are very often confused. As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143

  4. Sexual segregation in North American elk: the role of density dependence

    PubMed Central

    Stewart, Kelley M; Walsh, Danielle R; Kie, John G; Dick, Brian L; Bowyer, R Terry

    2015-01-01

    We investigated how density-dependent processes and subsequent variation in nutritional condition of individuals influenced both timing and duration of sexual segregation and selection of resources. During 1999–2001, we experimentally created two population densities of North American elk (Cervus elaphus), a high-density population at 20 elk/km², and a low-density population at 4 elk/km² to test hypotheses relative to timing and duration of sexual segregation and variation in selection of resources. We used multi-response permutation procedures to investigate patterns of sexual segregation, and resource selection functions to document differences in selection of resources by individuals in high- and low-density populations during sexual segregation and aggregation. The duration of sexual segregation was 2 months longer in the high-density population and likely was influenced by individuals in poorer nutritional condition, which corresponded with later conception and parturition, than at low density. Males and females in the high-density population overlapped in selection of resources to a greater extent than in the low-density population, probably resulting from density-dependent effects of increased intraspecific competition and lower availability of resources. PMID:25691992

  5. Evolution of probability densities in stochastic coupled map lattices

    NASA Astrophysics Data System (ADS)

    Losson, Jérôme; Mackey, Michael C.

    1995-08-01

    This paper describes the statistical properties of coupled map lattices subjected to the influence of stochastic perturbations. The stochastic analog of the Perron-Frobenius operator is derived for various types of noise. When the local dynamics satisfy rather mild conditions, this equation is shown to possess either stable, steady state solutions (i.e., a stable invariant density) or density limit cycles. Convergence of the phase space densities to these limit cycle solutions explains the nonstationary behavior of statistical quantifiers at equilibrium. Numerical experiments performed on various lattices of tent, logistic, and shift maps with diffusivelike interelement couplings are examined in light of these theoretical results.

  6. Nonlinear GARCH model and 1 / f noise

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Ruseckas, J.

    2015-06-01

    Auto-regressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract research interest. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, i.e., the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes under consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian-noise-like. However, the nonlinear modifications exhibit both a power-law distribution and a power spectral density of the 1/f^β form, including 1/f noise.
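
    A minimal simulation in this spirit (illustrative parameter values; the paper proceeds analytically via stochastic differential equations): generate a linear GARCH(1,1) series and estimate the spectral exponent of its absolute-return fluctuations from the periodogram.

      import numpy as np

      rng = np.random.default_rng(42)

      def garch11(n, omega=1e-5, alpha=0.09, beta=0.90):
          """Simulate n returns r_t = sigma_t * eps_t with
          sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
          r = np.empty(n)
          var = omega / (1.0 - alpha - beta)        # unconditional variance
          for t in range(n):
              r[t] = np.sqrt(var) * rng.standard_normal()
              var = omega + alpha * r[t] ** 2 + beta * var
          return r

      r = garch11(2 ** 16)
      x = np.abs(r) - np.abs(r).mean()              # volatility proxy
      psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
      f = np.fft.rfftfreq(x.size)
      band = (f > 1e-4) & (f < 1e-2)                # low-frequency decades
      slope = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
      print(f"S(f) ~ 1/f^beta with estimated beta ~ {-slope:.2f}")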

  7. Optimal estimation for the satellite attitude using star tracker measurements

    NASA Technical Reports Server (NTRS)

    Lo, J. T.-H.

    1986-01-01

    An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.

  8. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.

  9. Quantum fluctuation theorems and generalized measurements during the force protocol.

    PubMed

    Watanabe, Gentaro; Venkatesh, B Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010); Phys. Rev. E 83, 041114 (2011)], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.

  10. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

    Data for human sleep study may be affected by internal and external influences. The recorded sleep data contain complex and stochastic factors, which increase the difficulty of applying computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density function of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with visual inspection by a clinician. The developed system can meet the customized requirements in hospitals and institutions.

  11. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
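
    The parametric special case underlying the distance method is easy to state (a sketch, not the paper's nonparametric order-statistics estimator): for a homogeneous Poisson pattern of intensity lambda, the squared distance from a random point to the nearest plant is exponential with rate pi*lambda, so n such distances d_i give the unbiased estimator (n - 1) / (pi * sum d_i^2).

      import numpy as np

      rng = np.random.default_rng(7)
      lam_true, side = 50.0, 10.0            # plants per unit area; window side
      plants = rng.uniform(0, side, size=(rng.poisson(lam_true * side ** 2), 2))

      n = 200                                # random sample points, buffered
      pts = rng.uniform(1.0, side - 1.0, size=(n, 2))   # to avoid edge effects
      d2 = ((plants[None, :, :] - pts[:, None, :]) ** 2).sum(axis=2).min(axis=1)

      lam_hat = (n - 1) / (np.pi * d2.sum())
      print(f"true density {lam_true:.1f}, estimated {lam_hat:.1f}")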

  12. Monte Carlo based protocol for cell survival and tumour control probability in BNCT.

    PubMed

    Ye, S J

    1999-02-01

    A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed with the cell-killing yield, the ¹⁰B(n,α)⁷Li reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the ¹⁰B(n,α)⁷Li reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure producing an average cell survival probability of 10⁻³–10⁻⁵ for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low epithermal and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with the unmodified neutron spectrum. A dominant effect of cell-killing yield on tumour cell survival demonstrates the importance of choice of boron carrier drug. However, these calculations do not indicate an unambiguous preference for one drug, due to the large overlap of tumour cell survival in the probable ranges of the cell-killing yield for the two drugs. The cell survival value averaged over a bulky tumour volume is used to predict the overall BNCT therapeutic efficacy, using a simple model of tumour control probability (TCP).

  13. Spatial patterns and biodiversity in off-lattice simulations of a cyclic three-species Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.

    2018-02-01

    Stochastic simulations of cyclic three-species spatial predator-prey models are usually performed in square lattices with nearest-neighbour interactions starting from random initial conditions. In this letter we describe the results of off-lattice Lotka-Volterra stochastic simulations, showing that the emergence of spiral patterns does occur for sufficiently high values of the (conserved) total density of individuals. We also investigate the dynamics in our simulations, finding an empirical relation characterizing the dependence of the characteristic peak frequency and amplitude on the total density. Finally, we study the impact of the total density on the extinction probability, showing how a low population density may jeopardize biodiversity.

  14. Influence of anisotropic turbulence on the orbital angular momentum modes of Hermite-Gaussian vortex beam in the ocean.

    PubMed

    Li, Ye; Yu, Lin; Zhang, Yixin

    2017-05-29

    Applying angular spectrum theory, we derive the expression for a new Hermite-Gaussian (HG) vortex beam. Based on this beam, we establish a model of the received probability density of orbital angular momentum (OAM) modes for propagation through anisotropic oceanic turbulence. By numerical simulation, we investigate the influence of oceanic turbulence and beam parameters on the received probability density of signal OAM modes and crosstalk OAM modes of the HG vortex beam. The results show that the influence of anisotropic oceanic turbulence on the received probability of signal OAM modes is smaller than that of isotropic oceanic turbulence under the same conditions, and that the effect of salinity fluctuation on the received probability of signal OAM modes is larger than that of temperature fluctuation. For strong dissipation of kinetic energy per unit mass of fluid and a weak dissipation rate of temperature variance, we can reduce the effects of turbulence on the received probability of signal OAM modes by selecting a long wavelength and a larger transverse size of the HG vortex beam in the source plane. In long-distance propagation, the HG vortex beam is superior to the Laguerre-Gaussian beam in resisting the degradation caused by oceanic turbulence.

  15. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of the class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
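
    A two-class sketch of the decision rule the paper builds on (synthetic features stand in for polarimetric ones; means and bandwidth are assumptions): class-conditional likelihoods are estimated with Gaussian kernels, as in a PNN, and a sample is assigned to the class maximizing prior times likelihood.

      import numpy as np
      from sklearn.neighbors import KernelDensity

      rng = np.random.default_rng(3)
      # synthetic "target" and "background" training features (2-D)
      X0 = rng.normal([0.0, 0.0], 0.7, size=(300, 2))
      X1 = rng.normal([1.5, 1.0], 0.7, size=(200, 2))
      priors = np.array([len(X0), len(X1)], dtype=float)
      priors /= priors.sum()

      kdes = [KernelDensity(kernel="gaussian", bandwidth=0.3).fit(X)
              for X in (X0, X1)]

      def classify(x):
          """Return the class index maximizing the a posteriori probability."""
          log_post = [np.log(p) + kde.score_samples(np.atleast_2d(x))[0]
                      for p, kde in zip(priors, kdes)]
          return int(np.argmax(log_post))

      test = np.array([[0.1, -0.2], [1.4, 1.2], [0.8, 0.5]])
      print([classify(x) for x in test])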

  16. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolving values must be explained by the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words: Granular Physics, Probability Density Functions, Fourier Transforms
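
    The exponential/Bessel pairing can be checked numerically (a Monte Carlo sketch, not the paper's transform derivation): in two dimensions, an exponentially distributed force magnitude with an isotropic random direction yields a Cartesian component whose density is K0(|x|)/pi.

      import numpy as np
      from scipy.special import k0

      rng = np.random.default_rng(11)
      n = 1_000_000
      f = rng.exponential(1.0, n)                # exponential force magnitudes
      theta = rng.uniform(0.0, 2.0 * np.pi, n)   # isotropic directions
      fx = f * np.cos(theta)                     # Cartesian component

      hist, edges = np.histogram(fx, bins=200, range=(-4, 4), density=True)
      centres = 0.5 * (edges[:-1] + edges[1:])
      theory = k0(np.abs(centres)) / np.pi       # K0 diverges (log) at 0
      mask = np.abs(centres) > 0.1               # avoid the origin bins
      err = np.abs(hist - theory)[mask].max()
      print(f"max |histogram - K0/pi| away from the origin: {err:.4f}")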

  17. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
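
    A toy version of such a test (illustrative statistic and sizes, not the paper's exact construction): take the smallest value of the specified density p0 over the draws as the test statistic, calibrate its null distribution by Monte Carlo sampling from p0, and reject when the observed draws land where p0 is unusually small.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      p0 = stats.norm(loc=0.0, scale=1.0)        # specified density

      def stat(x):
          return p0.pdf(x).min()                 # low values flag unlikely draws

      n, n_null = 100, 10_000
      null = np.array([stat(p0.rvs(n, random_state=rng))
                       for _ in range(n_null)])

      x_obs = stats.norm(0.0, 1.4).rvs(n, random_state=rng)  # heavier tails
      p_value = (null <= stat(x_obs)).mean()
      print(f"p-value ~ {p_value:.4f}")   # small => reject that draws come from p0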

  18. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...

  19. Modeling take-over performance in level 3 conditionally automated vehicles.

    PubMed

    Gold, Christian; Happee, Riender; Bengler, Klaus

    2018-07-01

    Taking over vehicle control from a Level 3 conditionally automated vehicle can be a demanding task for a driver. The take-over determines the controllability of automated vehicle functions and thereby also traffic safety. This paper presents models predicting the main take-over performance variables: take-over time, minimum time-to-collision, brake application and crash probability. These variables are considered in relation to the situational and driver-related factors time-budget, traffic density, non-driving-related task, repetition, the current lane and driver's age. Regression models were developed using 753 take-over situations recorded in a series of driving simulator experiments. The models were validated with data from five other driving simulator experiments of mostly unrelated authors with another 729 take-over situations. The models accurately captured take-over time, time-to-collision and crash probability, and moderately predicted the brake application. Especially the time-budget, traffic density and the repetition strongly influenced the take-over performance, while the non-driving-related tasks, the lane and drivers' age explained a minor portion of the variance in the take-over performances.

  20. Does probability of occurrence relate to population dynamics?

    USGS Publications Warehouse

    Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.

    2014-01-01

    Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (Pocc). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with Pocc, while N, and for most regions K, was generally positively correlated with Pocc. Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.

  1. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
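
    The issue is easy to exhibit numerically (arbitrary illustrative coefficients): a Gram-Charlier density truncated after skewness and kurtosis corrections can dip below zero, whereas squaring a truncated expansion of the amplitude, i.e., the square root of the density, is nonnegative by construction, which is one route to the manifest positivity sought above.

      import numpy as np
      from numpy.polynomial.hermite_e import HermiteE

      x = np.linspace(-5, 5, 2001)
      phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

      # truncated Gram-Charlier A series with skewness/excess-kurtosis terms
      skew, exkurt = 1.0, 1.5
      gc = phi * (1 + (skew / 6) * HermiteE([0, 0, 0, 1])(x)
                    + (exkurt / 24) * HermiteE([0, 0, 0, 0, 1])(x))
      print(f"min of truncated Gram-Charlier density: {gc.min():.5f}")  # < 0

      # manifestly positive alternative: square a truncated amplitude expansion
      amp = np.sqrt(phi) * (1 + 0.1 * HermiteE([0, 0, 0, 1])(x))
      pos = amp ** 2
      pos /= pos.sum() * (x[1] - x[0])           # renormalize to unit area
      print(f"min of squared-amplitude density:      {pos.min():.5f}")  # >= 0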

  2. Unstable density distribution associated with equatorial plasma bubble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.

    2016-04-15

    In this work, we present a simulation study of an equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into the EPB, both density and updraft are functions of space, from which the density as an implicit function of updraft velocity, i.e., the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into the EPB. This non-Maxwellian distribution is of a gentle-bump type, confirming the recently reported distribution within EPBs from space-borne measurements, and it offers favorable conditions for small-scale kinetic instabilities.

  3. Global warming precipitation accumulation increases above the current-climate cutoff scale

    PubMed Central

    Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-01-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff. PMID:28115693
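
    The claimed sensitivity to the cutoff can be illustrated with the stated functional form (assumed parameter values): take p(s) proportional to s^(-tau) * exp(-s/sL) and compare the density of large accumulations before and after a modest cutoff extension.

      import numpy as np

      tau = 1.5
      s = np.logspace(-2, 4, 200_000)           # accumulation sizes, log grid
      ds = np.diff(s)

      def pdf(sL):
          p = s ** (-tau) * np.exp(-s / sL)
          p /= (0.5 * (p[1:] + p[:-1]) * ds).sum()   # trapezoid normalization
          return p

      p_now, p_warm = pdf(500.0), pdf(650.0)    # ~30% cutoff extension (assumed)
      for s_big in (500.0, 1000.0, 2000.0):
          i = np.searchsorted(s, s_big)
          ratio = p_warm[i] / p_now[i]
          print(f"s = {s_big:6.0f}: density increase ~ {100 * (ratio - 1):6.0f}%")

    Even this modest cutoff extension multiplies the density of the largest events by factors that grow with event size, qualitatively matching the large regional increases described above.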

  4. Global warming precipitation accumulation increases above the current-climate cutoff scale

    NASA Astrophysics Data System (ADS)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.

    2017-02-01

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  5. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  6. Global warming precipitation accumulation increases above the current-climate cutoff scale.

    PubMed

    Neelin, J David; Sahany, Sandeep; Stechmann, Samuel N; Bernstein, Diana N

    2017-02-07

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  7. Global warming precipitation accumulation increases above the current-climate cutoff scale

    DOE PAGES

    Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; ...

    2017-01-23

    Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.

  8. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.

  9. Shear coaxial injector atomization phenomena for combusting and non-combusting conditions

    NASA Technical Reports Server (NTRS)

    Pal, S.; Moser, M. D.; Ryan, H. M.; Foust, M. J.; Santoro, R. J.

    1992-01-01

    Measurements of LOX drop size and velocity in a uni-element liquid propellant rocket chamber are presented. The use of the Phase Doppler Particle Analyzer in obtaining temporally-averaged probability density functions of drop size in a harsh rocket environment has been demonstrated. Complementary measurements of drop size/velocity for simulants under cold flow conditions are also presented. The drop size/velocity measurements made for combusting and cold flow conditions are compared, and the results indicate that there are significant differences in the two flowfields.

  10. Quantum fluctuation theorems and generalized measurements during the force protocol

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter; Campisi, Michele; Hänggi, Peter

    2014-03-01

    Generalized measurements of an observable performed on a quantum system during a force protocol are investigated and conditions that guarantee the validity of the Jarzynski equality and the Crooks relation are formulated. In agreement with previous studies by M. Campisi, P. Talkner, and P. Hänggi [Phys. Rev. Lett. 105, 140601 (2010), 10.1103/PhysRevLett.105.140601; Phys. Rev. E 83, 041114 (2011), 10.1103/PhysRevE.83.041114], we find that these fluctuation relations are satisfied for projective measurements; however, for generalized measurements special conditions on the operators determining the measurements need to be met. For the Jarzynski equality to hold, the measurement operators of the forward protocol must be normalized in a particular way. The Crooks relation additionally entails that the backward and forward measurement operators depend on each other. Yet, quite some freedom is left as to how the two sets of operators are interrelated. This ambiguity is removed if one considers selective measurements, which are specified by a joint probability density function of work and measurement results of the considered observable. We find that the respective forward and backward joint probabilities satisfy the Crooks relation only if the measurement operators of the forward and backward protocols are the time-reversed adjoints of each other. In this case, the work probability density function conditioned on the measurement result satisfies a modified Crooks relation. The modification appears as a protocol-dependent factor that can be expressed by the information gained by the measurements during the forward and backward protocols. Finally, detailed fluctuation theorems with an arbitrary number of intervening measurements are obtained.

  11. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
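
    A minimal instance of the idea (assuming a Gaussian density with a linearly drifting mean; the paper's parametric model is more general): fit the nonstationary parameters by maximum likelihood on the observed window, then extrapolate the density forward in time.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(9)
      t = np.arange(300, dtype=float)
      x = 2.0 + 0.01 * t + rng.normal(0.0, 0.5, t.size)   # drifting series

      def nll(theta):
          """Negative log-likelihood of N(a + b*t, exp(log_s)^2)."""
          a, b, log_s = theta
          s2 = np.exp(2 * log_s)
          return 0.5 * np.sum((x - (a + b * t)) ** 2 / s2
                              + np.log(2 * np.pi * s2))

      fit = minimize(nll, x0=np.zeros(3), method="Nelder-Mead",
                     options={"maxiter": 5000})
      a, b, log_s = fit.x
      t_future = 400.0
      print(f"forecast density at t={t_future:.0f}: "
            f"N(mu={a + b * t_future:.2f}, sd={np.exp(log_s):.2f})")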

  12. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have developed a framework based on a simple, generalized rate-change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault; in the former, the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (density function, or PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models.

  13. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. Available published data are in the form of summary statistics, in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
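
    A rejection-ABC caricature of the data-consistency step (the numbers below are fabricated placeholders, not the shock-tube statistics used in the paper): draw Arrhenius parameters from a prior, evaluate k(T) = A * exp(-Ea / (R*T)) at the experimental temperatures, and keep draws falling within the published error bars.

      import numpy as np

      R = 8.314                                   # J / (mol K)
      T = np.array([1200.0, 1500.0, 1800.0])      # temperatures (K), assumed
      A_true, Ea_true = 3.5e10, 7.0e4             # placeholder "truth"
      k_nom = A_true * np.exp(-Ea_true / (R * T)) # fabricated nominal values
      k_err = 0.15 * k_nom                        # assumed 15% error bars

      rng = np.random.default_rng(2)
      n_draws = 200_000
      A = 10 ** rng.uniform(9.5, 11.5, n_draws)   # log-uniform prior on A
      Ea = rng.uniform(5.0e4, 9.0e4, n_draws)     # uniform prior on Ea (J/mol)

      k = A[:, None] * np.exp(-Ea[:, None] / (R * T[None, :]))
      accept = (np.abs(k - k_nom) <= k_err).all(axis=1)
      print(f"accepted {accept.sum()} of {n_draws} draws")
      print(f"posterior mean A  ~ {A[accept].mean():.2e}")
      print(f"posterior mean Ea ~ {Ea[accept].mean():.2e} J/mol")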

  14. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain-branching reaction H + O2 → OH + O. Available published data are in the form of summary statistics, in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  15. Molecular-Level Study of the Effect of Prior Axial Compression/Torsion on the Axial-Tensile Strength of PPTA Fibers

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.

    2013-11-01

    A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.

  16. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10⁶ galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
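
    The importance weights at the heart of this framework are commonly estimated with a probabilistic classifier (a sketch under the paper's assumption that selection depends only on the measured covariates x): train a classifier to separate labelled from unlabelled galaxies, then rescale its odds by the sample sizes to estimate the density ratio.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      # synthetic 1-D "magnitude": labelled galaxies are brighter on average
      x_lab = rng.normal(20.0, 1.0, size=(2000, 1))
      x_unl = rng.normal(22.0, 1.5, size=(8000, 1))

      X = np.vstack([x_lab, x_unl])
      y = np.concatenate([np.zeros(len(x_lab)), np.ones(len(x_unl))])
      clf = LogisticRegression().fit(X, y)

      def importance_weight(x):
          """Estimate f_unlabelled(x) / f_labelled(x) from classifier odds."""
          p = clf.predict_proba(np.atleast_2d(x))[:, 1]
          return (p / (1.0 - p)) * (len(x_lab) / len(x_unl))

      for mag in (19.0, 21.0, 23.0):
          print(f"magnitude {mag}: weight ~ {importance_weight([mag])[0]:.2f}")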

  17. Density matrix approach to the hot-electron stimulated photodesorption

    NASA Astrophysics Data System (ADS)

    Kühn, Oliver; May, Volkhard

    1996-07-01

    The dissipative dynamics of the laser-induced nonthermal desorption of small molecules from a metal surface is investigated here. Based on the density matrix formalism a multi-state model is introduced which explicitly takes into account the continuum of electronic states in the metal. Various relaxation mechanisms for the electronic degrees of freedom are shown to govern the desorption dynamics and hence the desorption probability. Particular attention is paid to the modeling of the time dependence of the electron energy distribution in the metal which reflects different excitation conditions.

  18. Analysis of high-resolution foreign exchange data of USD-JPY for 13 years

    NASA Astrophysics Data System (ADS)

    Mizuno, Takayuki; Kurihara, Shoko; Takayasu, Misako; Takayasu, Hideki

    2003-06-01

    We analyze high-resolution foreign exchange data consisting of 20 million data points of USD-JPY over 13 years to report firm statistical laws in the distributions and correlations of exchange rate fluctuations. A conditional probability density analysis clearly shows the existence of trend-following movements at a time scale of 8 ticks, about 1 min.

  19. Multivariate Epi-splines and Evolving Function Identification Problems

    DTIC Science & Technology

    2015-04-15

    such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the...previous study [30] dealt with compact intervals of ℝ. Splines are intimately tied to optimization problems through their variational theory pioneered...approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction

  20. Self-Supervised Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.

  1. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  2. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  3. The electron localization as the information content of the conditional pair density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbina, Andres S.; Torres, F. Javier; Universidad San Francisco de Quito

    2016-06-28

    In the present work, the information gained by an electron for "knowing" about the position of another electron with the same spin is calculated using the Kullback-Leibler divergence (D_KL) between the same-spin conditional pair probability density and the marginal probability. D_KL is proposed as an electron localization measurement, based on the observation that regions of the space with high information gain can be associated with strongly correlated localized electrons. Taking into consideration the scaling of D_KL with the number of σ-spin electrons of a system (N^σ), the quantity χ = (N^σ − 1) D_KL f_cut is introduced as a general descriptor that allows the quantification of the electron localization in the space. f_cut is defined such that it goes smoothly to zero for negligible densities. χ is computed for a selection of atomic and molecular systems in order to test its capability to determine the region in space where electrons are localized. As a general conclusion, χ is able to explain the electron structure of molecules on the basis of chemical grounds with a high degree of success and to produce a clear differentiation of the localization of electrons that can be traced to the fluctuation in the average number of electrons in these regions.
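
    As an illustrative aside (our sketch, not the authors' code), a divergence of this form can be approximated on a grid once both densities are available numerically:

```python
import numpy as np

def kl_divergence(p, q, dV, eps=1e-300):
    """D_KL(p || q) = integral of p * ln(p / q), approximated as a Riemann sum
    over a grid with volume element dV; p and q are arrays of equal shape."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    mask = p > eps                    # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))) * dV)
```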

  4. A computationally efficient ductile damage model accounting for nucleation and micro-inertia at high triaxialities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Bronkhorst, Curt Allan

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.

  5. Statistics of velocity gradients in two-dimensional Navier-Stokes and ocean turbulence.

    PubMed

    Schorghofer, Norbert; Gille, Sarah T

    2002-02-01

    Probability density functions and conditional averages of velocity gradients derived from upper ocean observations are compared with results from forced simulations of the two-dimensional Navier-Stokes equations. Ocean data are derived from TOPEX satellite altimeter measurements. The simulations use rapid forcing on large scales, characteristic of surface winds. The probability distributions of transverse velocity derivatives from the ocean observations agree with the forced simulations, although they differ from unforced simulations reported elsewhere. The distribution and cross correlation of velocity derivatives provide clear evidence that large coherent eddies play only a minor role in generating the observed statistics.

  6. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  7. Site specific passive acoustic detection and densities of humpback whale calls off the coast of California

    NASA Astrophysics Data System (ADS)

    Helble, Tyler Adam

    Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. Knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty in the results than direct comparison of the raw statistics. This dissertation focuses on both the development of new tools needed to automatically detect humpback whale vocalizations from single, fixed, omnidirectional sensors and the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values and presented as call densities (calls per square kilometer per unit time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities are also compared to several natural and human-influenced variables, including season, time of day, lunar illumination, and ocean noise. The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The results presented in this dissertation develop techniques to accurately measure marine mammal abundances from passive acoustic sensors.

  8. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
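
    For concreteness, here is a minimal sketch of the L1-median (geometric median) via Weiszfeld's fixed-point iteration, one standard way to compute this robust location estimator; it is our illustration, not the paper's implementation:

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """X: (n, d) array of samples. Returns the point minimizing the sum of
    Euclidean distances to the samples (the L1-median)."""
    m = X.mean(axis=0)                       # start from the (non-robust) mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)    # guard against zero distances
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m
```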

  9. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.

  10. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  11. Design rules for quasi-linear nonlinear optical structures

    NASA Astrophysics Data System (ADS)

    Lytel, Richard; Mossman, Sean M.; Kuzyk, Mark G.

    2015-09-01

    The maximization of the intrinsic optical nonlinearities of quantum structures for ultrafast applications requires a spectrum scaling as the square of the energy eigenstate number or faster. This is a necessary condition for an intrinsic response approaching the fundamental limits. A second condition is a design generating eigenstates whose ground and lowest excited state probability densities are spatially separated to produce large differences in dipole moments while maintaining a reasonable spatial overlap to produce large off-diagonal transition moments. A structure whose design meets both conditions will necessarily have large first or second hyperpolarizabilities. These two conditions are fundamental heuristics for the design of any nonlinear optical structure.

  12. Many-body calculations of low energy eigenstates in magnetic and periodic systems with self healing diffusion Monte Carlo: steps beyond the fixed-phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboredo, Fernando A.

    The self-healing diffusion Monte Carlo algorithm (SHDMC) [Reboredo, Hood and Kent, Phys. Rev. B 79, 195117 (2009); Reboredo, ibid. 80, 125110 (2009)] is extended to study the ground and excited states of magnetic and periodic systems. A recursive optimization algorithm is derived from the time evolution of the mixed probability density. The mixed probability density is given by an ensemble of electronic configurations (walkers) with complex weight. This complex weight allows the amplitude of the fixed-node wave function to move away from the trial wave function phase. This novel approach is a generalization of both SHDMC and the fixed-phase approximation [Ortiz, Ceperley and Martin, Phys. Rev. Lett. 71, 2777 (1993)]. When used recursively it improves simultaneously the node and the phase. The algorithm is demonstrated to converge to the nearly exact solutions of model systems with periodic boundary conditions or applied magnetic fields. The method is also applied to obtain low energy excitations with magnetic field or periodic boundary conditions. The potential applications of this new method to study periodic, magnetic, and complex Hamiltonians are discussed.

  13. Variation of fan tone steadiness for several inflow conditions

    NASA Technical Reports Server (NTRS)

    Balombin, J. R.

    1978-01-01

    An amplitude probability density function analysis technique for quantifying the degree of fan noise tone steadiness has been applied to data from a fan tested under a variety of inflow conditions. The test conditions included typical static operation, inflow control by a honeycomb/screen device, and forward velocity in a wind tunnel simulating flight. The ratio of mean-square sinusoidal-to-random signal content in the fundamental and second harmonic tones was found to vary by more than an order of magnitude. Some implications of these results concerning the nature of fan noise generation mechanisms are discussed.

  14. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.

  15. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity- and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
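
    A heavily simplified sketch of the fusion step: if the two conditionals are treated as independent pieces of evidence (an assumption we make for illustration; the paper's Bayesian formalism is more careful), they combine as a normalized product:

```python
import numpy as np

def fuse_conditionals(p_given_intensity, p_given_location):
    """Both inputs: discrete conditional densities of electron density for one
    voxel, evaluated over the same candidate values. Treats them as independent
    evidence (our simplification) and returns the normalized product."""
    post = np.asarray(p_given_intensity, float) * np.asarray(p_given_location, float)
    return post / post.sum()

# A point estimate could then be the posterior mean over the candidate values:
# rho_hat = np.sum(fuse_conditionals(p_i, p_g) * candidate_values)
```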

  16. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  17. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models that incorporate the effect of small-scale mixing on the transported scalar are compared: the widely used interaction by exchange with the mean (IEM) and the interaction by exchange with the conditional mean (IECM) model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Re_τ = 1080, based on the friction velocity and the channel half-width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
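
    For readers unfamiliar with micromixing models, a toy sketch of the IEM update mentioned above (our illustration, not the study's solver): each particle's scalar relaxes toward the cell mean on the mixing time scale:

```python
import numpy as np

def iem_step(phi, tau, dt):
    """One explicit step of d(phi_i)/dt = -(phi_i - <phi>) / (2 tau) for the
    particles in a single cell; the IECM variant replaces <phi> by the mean
    conditioned on the particle's velocity."""
    return phi - dt / (2.0 * tau) * (phi - phi.mean())
```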

  18. An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, J. Y.; Kitanidis, P. K.

    2013-12-01

    Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
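
    For context, the sketch below shows a generic stochastic EnKF analysis step with perturbed observations, in the textbook Evensen style; the iterative quasi-linear scheme introduced in the abstract differs in how it iterates and weights realizations, so this is background rather than the authors' algorithm:

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, seed=0):
    """Xf: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation vector;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) noise covariance."""
    rng = np.random.default_rng(seed)
    n_ens = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                       # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return Xf + K @ (Y - H @ Xf)                    # analysis ensemble
```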

  19. Small and large wetland fragments are equally suited breeding sites for a ground-nesting passerine.

    PubMed

    Pasinelli, Gilberto; Mayer, Christian; Gouskov, Alexandre; Schiegg, Karin

    2008-06-01

    Large habitat fragments are generally thought to host more species and to offer more diverse and/or better quality habitats than small fragments. However, the importance of small fragments for population dynamics in general and for reproductive performance in particular is highly controversial. Using an information-theoretic approach, we examined reproductive performance and probability of local recruitment of color-banded reed buntings Emberiza schoeniclus in relation to the size of 18 wetland fragments in northeastern Switzerland over 4 years. We also investigated whether reproductive performance and recruitment probability were density-dependent. Neither the four measures of reproductive performance (laying date, nest failure probability, fledgling production per territory, fledgling condition) nor recruitment probability were found to be related to wetland fragment size. In terms of fledgling production, however, fragment size interacted with year, indicating that small fragments were better reproductive grounds in some years than large fragments. Reproductive performance and recruitment probability were not density-dependent. Our results suggest that small fragments are equally suited as breeding grounds for the reed bunting as large fragments and should therefore be managed to provide a habitat for this and other specialists occurring in the same habitat. Moreover, large fragments may represent sinks in specific years, because a substantial percentage of all breeding pairs in our study area breed in large fragments, and reproductive failure in these fragments due to the regularly occurring floods may have a much stronger impact on regional population dynamics than comparable events in small fragments.

  20. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
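
    The quantile-residual idea can be sketched as follows (our reconstruction; the exact scaling used by the authors may differ): transform the sorted samples through the estimated CDF and compare with the expected uniform order statistics:

```python
import numpy as np

def scaled_quantile_residuals(samples, cdf):
    """samples: 1-D data; cdf: callable estimated cumulative distribution.
    Residuals should scatter around zero with O(1) spread if cdf fits the data."""
    u = cdf(np.sort(samples))                  # probability integral transform
    n = len(u)
    expected = np.arange(1, n + 1) / (n + 1)   # means of uniform order statistics
    return np.sqrt(n + 2) * (u - expected)
```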

  1. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  2. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  3. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
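
    The underlying transformation rule is the standard multivariate change of variables; a minimal sketch (our illustration) of evaluating a density after an invertible reparameterization:

```python
import numpy as np

def transform_density(p_x, g_inv, jac_g_inv, y):
    """Evaluate p_Y(y) = p_X(g_inv(y)) * |det J_{g_inv}(y)| for an invertible
    reparameterization y = g(x); jac_g_inv returns the Jacobian matrix at y."""
    x = g_inv(y)
    return p_x(x) * abs(np.linalg.det(jac_g_inv(y)))
```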

  4. Stan : A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
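
    A minimal usage sketch of the pystan interface named in the abstract (pystan 2.x style; the model and data here are hypothetical, and compilation requires a C++ toolchain):

```python
import numpy as np
import pystan

model_code = """
data { int<lower=0> N; vector[N] y; }
parameters { real mu; real<lower=0> sigma; }
model { y ~ normal(mu, sigma); }   // log density defined imperatively
"""

y = np.random.normal(1.0, 2.0, size=100)            # synthetic data
sm = pystan.StanModel(model_code=model_code)        # compile the model
fit = sm.sampling(data={"N": len(y), "y": y}, iter=2000, chains=4)  # NUTS
print(fit)                                          # posterior summaries
```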

  5. Stan : A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  6. Fokker-Planck description of conductance-based integrate-and-fire neuronal networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovacic, Gregor; Tao, Louis; Rangan, Aaditya V.

    2009-08-15

    Steady dynamics of coupled conductance-based integrate-and-fire neuronal networks in the limit of small fluctuations is studied via the equilibrium states of a Fokker-Planck equation. An asymptotic approximation for the membrane-potential probability density function is derived and the corresponding gain curves are found. Validity conditions are discussed for the Fokker-Planck description and verified via direct numerical simulations.

  7. Dynamics of Markets

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2009-09-01

    Preface; 1. Econophysics: why and what; 2. Neo-classical economic theory; 3. Probability and stochastic processes; 4. Introduction to financial economics; 5. Introduction to portfolio selection theory; 6. Scaling, pair correlations, and conditional densities; 7. Statistical ensembles: deducing dynamics from time series; 8. Martingale option pricing; 9. FX market globalization: evolution of the dollar to worldwide reserve currency; 10. Macroeconomics and econometrics: regression models vs. empirically based modeling; 11. Complexity; Index.

  8. A computationally efficient ductile damage model accounting for nucleation and micro-inertia at high triaxialities

    DOE PAGES

    Versino, Daniele; Bronkhorst, Curt Allan

    2018-01-31

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.

  9. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  10. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  11. Weak Measurement and Quantum Smoothing of a Superconducting Qubit

    NASA Astrophysics Data System (ADS)

    Tan, Dian

    In quantum mechanics, the measurement outcome of an observable in a quantum system is intrinsically random, yielding a probability distribution. The state of the quantum system can be described by a density matrix ρ(t), which depends on the information accumulated until time t, and represents our knowledge about the system. The density matrix ρ(t) gives probabilities for the outcomes of measurements at time t. Further probing of the quantum system allows us to refine our prediction in hindsight. In this thesis, we experimentally examine a quantum smoothing theory in a superconducting qubit by introducing an auxiliary matrix E(t) which is conditioned on information obtained from time t to a final time T. With the complete information before and after time t, the pair of matrices [ρ(t), E(t)] can be used to make smoothed predictions for the measurement outcome at time t. We apply the quantum smoothing theory in the case of continuous weak measurement, unveiling the retrodicted quantum trajectories and weak values. In the case of strong projective measurement, while the density matrix ρ(t) with only diagonal elements in a given basis |n⟩ may be treated as a classical mixture, we demonstrate a failure of this classical mixture description in determining the smoothed probabilities for the measurement outcome at time t with both diagonal ρ(t) and diagonal E(t). We study the correlations between quantum states and weak measurement signals and examine aspects of the time symmetry of continuous quantum measurement. We also extend our study of quantum smoothing theory to the case of resonance fluorescence of a superconducting qubit with homodyne measurement and observe some interesting effects such as the modification of the excited state probabilities, weak values, and evolution of the predicted and retrodicted trajectories.
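
    A minimal numerical sketch of how the pair [ρ(t), E(t)] yields smoothed outcome probabilities; the trace formula below is the standard past-quantum-state expression and is assumed here rather than quoted from the thesis:

```python
import numpy as np

def smoothed_probabilities(rho, E, measurement_ops):
    """rho, E: (d, d) matrices; measurement_ops: Kraus operators M_m with
    sum_m M_m^dag M_m = I. Returns p(m) proportional to Tr[M_m rho M_m^dag E]."""
    p = np.array([np.trace(M @ rho @ M.conj().T @ E).real
                  for M in measurement_ops])
    return p / p.sum()    # normalize over the possible outcomes m
```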

  12. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the birds' sex, nesting status, distance to playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly, and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation in mean response per survey period (r² = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.
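
    The logistic model referred to above has a simple closed form; the sketch below is ours, and the coefficient values would come from the fitted regression rather than from this record:

```python
import numpy as np

def response_probability(beta0, beta, x):
    """P(response | x) = 1 / (1 + exp(-(beta0 + beta . x))) for covariates x
    encoding sex, nesting status, distance to playback source, and survey time."""
    return 1.0 / (1.0 + np.exp(-(beta0 + np.dot(beta, np.asarray(x, float)))))

# Hypothetical usage with made-up coefficients and covariates:
# p = response_probability(-0.5, [1.2, -0.8, -0.01, 0.3], [1, 0, 50, 1])
```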

  13. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km²) and low (≤1 marten/10.2 km²) densities within 8 10.2-km² quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.

  14. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
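
    The reweighting step can be sketched as follows (our illustration, with all densities supplied by the user): samples drawn once from the importance density q are reused for every candidate model p_i via self-normalized importance weights:

```python
import numpy as np

def reweighted_expectations(samples, g, q_pdf, candidate_pdfs):
    """samples: draws from the importance density q; g: quantity of interest.
    Returns the self-normalized estimate of E_{p_i}[g] for each candidate p_i."""
    gx = np.array([g(x) for x in samples])
    qx = np.array([q_pdf(x) for x in samples])
    estimates = []
    for p in candidate_pdfs:
        w = np.array([p(x) for x in samples]) / qx   # importance weights
        estimates.append(np.sum(w * gx) / np.sum(w))
    return np.array(estimates)
```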

  15. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

  16. How likely are constituent quanta to initiate inflation?

    DOE PAGES

    Berezhiani, Lasha; Trodden, Mark

    2015-08-06

    In this study, we propose an intuitive framework for studying the problem of initial conditions in slow-roll inflation. In particular, we consider a universe at high, but sub-Planckian, energy density and analyze the circumstances under which it is plausible for it to become dominated by inflated patches at late times, without appealing to the idea of self-reproduction. Our approach is based on defining a prior probability distribution for the constituent quanta of the pre-inflationary universe. To test the idea that inflation can begin under very generic circumstances, we make specific – yet quite general and well grounded – assumptions on the prior distribution. As a result, we are led to the conclusion that the probability for a given region to ignite inflation at sub-Planckian densities is extremely small. Furthermore, if one chooses to use the enormous volume factor that inflation yields as an appropriate measure, we find that the regions of the universe which started inflating at densities below the self-reproductive threshold nevertheless occupy a negligible physical volume in the present universe as compared to those domains that have never inflated.

  17. A hidden Markov model approach to neuron firing patterns.

    PubMed

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-11-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
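
    A modern sketch of the same idea (the authors fit their model by direct maximum likelihood; here the third-party hmmlearn package stands in, and the log-scale Gaussian emission is our simplification):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party; stands in for the authors' fit

def fit_isi_hmm(intervals, n_states=2):
    """intervals: 1-D array of interspike intervals. Hidden states emit
    log-intervals modeled as Gaussian; returns the transition matrix, the
    state means (log scale), and the most likely state sequence."""
    X = np.log(np.asarray(intervals, float)).reshape(-1, 1)
    model = GaussianHMM(n_components=n_states, n_iter=200).fit(X)
    return model.transmat_, model.means_.ravel(), model.predict(X)
```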

  18. Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise.

    PubMed

    Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang

    2017-02-14

    In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. In order to eliminate the adverse effects of geomagnetic activity and the powerline, the measured field data were first preprocessed to suppress baseline wandering and harmonics by symmetric wavelet transform and least-squares methods. Statistical analysis was then performed for the atmospheric noise on different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate to describe ELF atmospheric noise preprocessed by a hole-puncher operator, while for VLF atmospheric noise the symmetric α-stable (SαS) distribution is more accurate in fitting the heavy tail of the envelope probability density function (pdf).
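
    As an illustration of the final modeling step (our sketch, not the authors' pipeline), SciPy can fit both candidate models; note that levy_stable.fit can be slow on large samples:

```python
import numpy as np
from scipy.stats import levy_stable, norm

def fit_noise_models(elf_samples, vlf_envelope):
    """Gaussian fit for the preprocessed ELF noise; alpha-stable fit for the
    VLF envelope (alpha < 2 signals heavier-than-Gaussian tails)."""
    mu, sigma = norm.fit(elf_samples)
    alpha, beta, loc, scale = levy_stable.fit(vlf_envelope)
    return (mu, sigma), (alpha, beta, loc, scale)
```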

  19. Initial Results from SQUID Sensor: Analysis and Modeling for the ELF/VLF Atmospheric Noise

    PubMed Central

    Hao, Huan; Wang, Huali; Chen, Liang; Wu, Jun; Qiu, Longqing; Rong, Liangliang

    2017-01-01

    In this paper, the amplitude probability density (APD) of wideband extremely low frequency (ELF) and very low frequency (VLF) atmospheric noise is studied. The electromagnetic signals from the atmosphere, referred to herein as atmospheric noise, were recorded by a mobile low-temperature superconducting quantum interference device (SQUID) receiver under magnetically unshielded conditions. In order to eliminate the adverse effects of geomagnetic activity and the powerline, the measured field data were first preprocessed to suppress baseline wandering and harmonics by symmetric wavelet transform and least-squares methods. Statistical analysis was then performed for the atmospheric noise on different time and frequency scales. Finally, the wideband ELF/VLF atmospheric noise was analyzed and modeled separately. Experimental results show that a Gaussian model is appropriate to describe ELF atmospheric noise preprocessed by a hole-puncher operator, while for VLF atmospheric noise the symmetric α-stable (SαS) distribution is more accurate in fitting the heavy tail of the envelope probability density function (pdf). PMID:28216590

  20. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetical forces such as recombination, selection, random drift, ... is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations.
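
    A much-simplified one-locus stand-in for the two-locus solver (our illustration): an explicit finite-difference step for the Kolmogorov forward equation of the neutral Wright-Fisher diffusion:

```python
import numpy as np

def forward_step(p, x, dt):
    """One explicit Euler step of dp/dt = 0.5 * d2/dx2 [ x(1-x) p ] on a
    uniform grid x (time in units of 2N generations; no selection/mutation)."""
    dx = x[1] - x[0]
    f = 0.5 * x * (1.0 - x) * p
    d2f = np.zeros_like(f)
    d2f[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return p + dt * d2f                    # stability requires dt ~ dx^2 or smaller

# Example: diffuse an initially peaked allele-frequency distribution.
x = np.linspace(0.0, 1.0, 201)
p = np.exp(-((x - 0.5) ** 2) / 0.002)
p /= p.sum() * (x[1] - x[0])               # normalize to unit mass
for _ in range(2000):
    p = forward_step(p, x, dt=1e-6)
```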

  1. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of DFT used is only applicable to bounded-potential fluids. When combined with the hypernetted-chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere potentials at high density. The Gaussian form for P(F) remains accurate at lower (but not too low) densities for the two potentials, but with a smaller value of the constant A than that predicted by the DFT theory.
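
    A quick numeric check of the stated form: if P(F) ∝ exp(-AF²) for the force vector in three dimensions, the force magnitude follows a Maxwell-like density W(F) = 4πF² (A/π)^(3/2) exp(-AF²). The value of A below is illustrative, not taken from the paper.

```python
import numpy as np

A = 2.0
F = np.linspace(0.0, 5.0, 2001)
W = 4 * np.pi * F**2 * (A / np.pi) ** 1.5 * np.exp(-A * F**2)
dF = F[1] - F[0]

# Trapezoid checks: W integrates to 1, and the mean |F| is 2/sqrt(pi*A).
print("normalization:", (W.sum() - 0.5 * (W[0] + W[-1])) * dF)
print("mean |F|:", (F * W).sum() * dF, "expected:", 2 / np.sqrt(np.pi * A))
```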

  2. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  3. Comparative study of probability distribution distances to define a metric for the stability of multi-source biomedical research data.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-01-01

    Research biobanks are often composed of data from multiple sources. In some cases, these subsets of data may present dissimilarities among their probability density functions (PDFs) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate data, different types of variable, and multi-modality) which may appear in real biomedical data. Of the studied distances, we found the information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
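
    A hedged sketch of the comparison idea: measure how far two data sources' distributions drift apart using an information-theoretic distance (Jensen-Shannon) and the Earth Mover's (Wasserstein) distance. The synthetic "sites" and the mean shift are illustrative stand-ins for real multi-source biobank data.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
site_a = rng.normal(0.0, 1.0, 2000)       # source 1
site_b = rng.normal(0.5, 1.0, 2000)       # source 2, shifted mean

# EMD works directly on samples:
print("EMD:", wasserstein_distance(site_a, site_b))

# JS distance needs binned estimates of the two pdfs on a shared grid:
bins = np.linspace(-5, 6, 60)
pa, _ = np.histogram(site_a, bins=bins, density=True)
pb, _ = np.histogram(site_b, bins=bins, density=True)
print("JS distance:", jensenshannon(pa, pb))
```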

  4. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2% and 92.5-98.5% for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated the seasonal mean concentration, however, and did not reproduce all of the peak concentrations; this issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
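
    A sketch of the two-stage scheme described above; the Weibull parameters, regression coefficients, and variable names are illustrative assumptions, not the paper's calibrated values. A Weibull pdf sets the seasonal potential, and a weather regression scales it to a daily value.

```python
import numpy as np
from scipy.stats import weibull_min

shape, scale_days = 2.2, 40.0            # assumed Weibull parameters
day = np.arange(1, 91)                   # days since season onset
seasonal_potential = weibull_min.pdf(day, shape, scale=scale_days)

def daily_pollen(day_idx, temp, wind, rain, b=(0.8, 0.05, 0.1, -0.3)):
    """Toy multiple regression modulating the Weibull seasonal potential."""
    weather = b[0] + b[1] * temp + b[2] * wind + b[3] * rain
    return max(weather, 0.0) * seasonal_potential[day_idx - 1]

print(daily_pollen(30, temp=18.0, wind=2.0, rain=0.0))
```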

  5. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function.

    PubMed

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2% and 92.5-98.5% for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated the seasonal mean concentration, however, and did not reproduce all of the peak concentrations; this issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  6. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
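
    A minimal sketch of the BMA predictive pdf: a weighted mixture of member pdfs centered on (bias-corrected) forecasts. Gaussian members are used here for brevity; the paper's point is to replace them with flexible particle-filter/Gaussian-mixture estimates per member. The forecasts, weights, and spreads below are illustrative.

```python
import numpy as np
from scipy.stats import norm

forecasts = np.array([2.1, 2.6, 1.8])    # member forecasts (illustrative)
weights = np.array([0.5, 0.3, 0.2])      # posterior model weights, sum to 1
sigmas = np.array([0.4, 0.6, 0.5])       # member spread from training

def bma_pdf(y):
    """Weighted mixture of the member predictive densities."""
    return np.sum(weights * norm.pdf(y, loc=forecasts, scale=sigmas))

y = np.linspace(0.0, 5.0, 501)
pdf = np.array([bma_pdf(v) for v in y])
# The BMA mean equals the weight-averaged member forecast (~2.19 here).
print("BMA mean:", np.sum(y * pdf) * (y[1] - y[0]))
```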

  7. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
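
    A hedged sketch of the copula construction: join two fitted marginals (diameter and height) with a bivariate normal copula. The lognormal marginals and the correlation value are illustrative stand-ins for the fitted SDE transition densities used in the paper.

```python
import numpy as np
from scipy.stats import norm, lognorm

rho = 0.8                                  # assumed copula correlation
d_marg = lognorm(s=0.3, scale=25.0)        # diameter marginal (cm), assumed
h_marg = lognorm(s=0.25, scale=20.0)       # height marginal (m), assumed

rng = np.random.default_rng(2)
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=1000)
u = norm.cdf(z)                            # uniform margins via the copula
d = d_marg.ppf(u[:, 0])                    # transform to physical margins
h = h_marg.ppf(u[:, 1])
print("sample corr(diameter, height):", np.corrcoef(d, h)[0, 1])
```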

  8. A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-06-13

    The Lanchester combat model is a simple way to assess the effects of quantity and quality... case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons... since the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at

  9. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    NASA Astrophysics Data System (ADS)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area, using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. The appropriate ground motion prediction equations (GMPEs) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with the centroid fixed at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. There is currently a trend of using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.

  10. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent, very thin slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  11. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent, very thin slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  12. Testing critical point universality along the λ-line

    NASA Astrophysics Data System (ADS)

    Nissen, J. A.; Swanson, D. R.; Geng, Z. K.; Dohm, V.; Israelsson, U. E.; DiPirro, M. J.; Lipa, J. A.

    1998-02-01

    We are currently building a prototype for a new test of critical-point universality at the lambda transition in 4He, which is to be performed in microgravity conditions. The flight experiment will measure the second-sound velocity as a function of temperature at pressures from 1 to 30 bars in the region close to the lambda line. The critical exponents and other parameters characterizing the behavior of the superfluid density will be determined from the measurements. The microgravity measurements will be quite extensive, probably taking 30 days to complete. In addition to the superfluid density, some measurements of the specific heat will be made using the low-g simulator at the Jet Propulsion Laboratory. The results of the superfluid density and specific heat measurements will be used to compare the asymptotic exponents and other universal aspects of the superfluid density with the theoretical predictions currently established by renormalization group techniques.

  13. Study of discharge cleaning process in JIPP T-2 Torus by residual gas analyzer

    NASA Astrophysics Data System (ADS)

    Noda, N.; Hirokura, S.; Taniguchi, Y.; Tanahashi, S.

    1982-12-01

    During discharge cleaning, the decay time of the water vapor pressure changes when the pressure reaches a certain level. The long decay time observed in the later phase can be interpreted as a result of the slow deoxidization rate of chromium oxide, which may dominate the cleaning process in this phase. Optimization of the plasma density for cleaning is discussed by comparing the experimental results on the density dependence of water vapor pressure with a zero-dimensional calculation of particle balance. One essential point for effective cleaning is raising the electron density of the plasma high enough that the dissociation loss rate of H2O is as large as the sticking loss rate. A density as high as 10^11 cm^-3 is required for a clean surface condition, where the sticking probability is presumed to be around 0.5.

  14. Intermittent turbulence and turbulent structures in LAPD and ET

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Pace, D. C.; White, A. E.; Gauvreau, J.-L.; Gourdain, P.-A.; Schmitz, L.; Taylor, R. J.

    2006-12-01

    Strongly intermittent turbulence is observed in the shadow of a limiter in the Large Plasma Device (LAPD) and in both the inboard and outboard scrape-off-layer (SOL) in the Electric Tokamak (ET) at UCLA. In LAPD, the amplitude probability distribution function (PDF) of the turbulence is strongly skewed, with density depletion events (or "holes") dominant in the high density region and density enhancement events (or "blobs") dominant in the low density region. Two-dimensional cross-conditional averaging shows that the blobs are detached, outward-propagating filamentary structures with a clear dipolar potential while the holes appear to be part of a more extended turbulent structure. A statistical study of the blobs reveals a typical size of ten times the ion sound gyroradius and a typical velocity of one tenth the sound speed. In ET, intermittent turbulence is observed on both the inboard and outboard midplane.

  15. Self-focusing quantum states

    NASA Astrophysics Data System (ADS)

    Villanueva, Anthony Allan D.

    2018-02-01

    We discuss a class of solutions of the time-dependent Schrödinger equation such that the position uncertainty temporarily decreases. This self-focusing or contractive behavior is a consequence of the anti-correlation of the position and momentum observables. Since the associated position density satisfies a continuity equation, upon contraction the probability current at a given fixed point may flow in the opposite direction of the group velocity of the wave packet. For definiteness, we consider a free particle incident from the left of the origin, and establish a condition for the initial position-momentum correlation such that a negative probability current at the origin is possible. This implies a decrease in the particle's detection probability in the region x > 0, and we calculate how long this occurs. Analogous results are obtained for a particle subject to a uniform gravitational force if we consider the particle approaching the turning point. We show that position-momentum anti-correlation may cause a negative probability current at the turning point, leading to a temporary decrease in the particle's detection probability in the classically forbidden region.

  16. SUGGEL: A Program Suggesting the Orbital Angular Momentum of a Neutron Resonance from the Magnitude of its Neutron Width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, S.Y.

    2001-02-02

    The SUGGEL computer code has been developed to suggest a value for the orbital angular momentum of a neutron resonance that is consistent with the magnitude of its neutron width. The suggestion is based on the probability that a resonance having a certain value of gΓn is an l-wave resonance. The probability is calculated by using Bayes' theorem on the conditional probability. The probability density functions (pdf's) of gΓn for up to d-wave (l = 2) resonances have been derived from the χ² distribution of Porter and Thomas. The pdf's take two possible channel spins into account. This code is a tool which evaluators will use to construct resonance parameters and help assign resonance spins. The use of this tool is expected to reduce time and effort in the evaluation procedure, since the number of repeated runs of the fitting code (e.g., SAMMY) may be reduced.

  17. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition ensures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
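
    A 1-D sketch of the probability-conserving finite-volume step (the paper's example is the pendulum's 2-D phase space; a constant advection speed is assumed here for brevity). Upwind fluxes conserve total probability, and the CFL-limited time step keeps the density non-negative.

```python
import numpy as np

nx, dx, speed = 200, 0.01, 0.3
dt = 0.8 * dx / speed                 # CFL number 0.8 < 1
p = np.zeros(nx)
p[90:110] = 1.0
p /= p.sum() * dx                     # initial probability density

for _ in range(60):
    flux = speed * p                  # upwind flux (speed > 0: from the left)
    p[1:] -= dt / dx * (flux[1:] - flux[:-1])
    p[0] -= dt / dx * flux[0]         # zero inflow at the left boundary

# Total probability stays 1 and the density stays non-negative.
print("total probability:", p.sum() * dx, " min value:", p.min())
```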

  18. Profit intensity and cases of non-compliance with the law of demand/supply

    NASA Astrophysics Data System (ADS)

    Makowski, Marcin; Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek

    2017-05-01

    We consider properties of the measurement intensity ρ of a random variable for which the probability density function represented by the corresponding Wigner function attains negative values on a part of the domain. We consider a simple economic interpretation of this problem. This model is used to present the applicability of the method to the analysis of the negative probability on markets where there are anomalies in the law of supply and demand (e.g. Giffen's goods). It turns out that the new conditions to optimize the intensity ρ require a new strategy. We propose a strategy (so-called à rebours strategy) based on the fixed point method and explore its effectiveness.

  19. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non-Boolean structure and non-positive-definite phase-space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken, a variant interpretation of wave functions based on photo detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well-tested concepts and technology.

  20. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non-Boolean structure and non-positive-definite phase-space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken, a variant interpretation of wave functions based on photo detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well-tested concepts and technology.

  1. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected in the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies are also changed to some extent. Therefore, under a changing environment, the direct benefit of a waterlogging control project should be the reduction in waterlogging losses compared to conditions with no control project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. Firstly, on the basis of a copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the two-dimensional domain of integration is determined and divided into small domains; for each small domain, the probability and the difference between the average waterlogging losses with and without the project, called the regional benefit of the waterlogging control project, are calculated under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the project benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedure of waterlogging control project benefit estimation. The results show that the constructed benefit estimation model is applicable to the changing conditions that occur both in the disaster-inducing environment of the waterlogging-prone zone and in the disaster-bearing bodies, considering all conditions in which rainstorms of all frequencies meet different water levels in the drainage-accepting zone. The estimation method can thus reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.

  2. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.

  3. Local and neighboring patch conditions alter sex-specific movement in banana weevils.

    PubMed

    Carval, Dominique; Perrin, Benjamin; Duyck, Pierre-François; Tixier, Philippe

    2015-12-01

    Understanding the mechanisms underlying the movements and spread of a species over time and space is a major concern of ecology. Here, we assessed the effects of an individual's sex and the density and sex ratio of conspecifics in the local and neighboring environment on the movement probability of the banana weevil, Cosmopolites sordidus. In a "two patches" experiment, we used radiofrequency identification tags to study the C. sordidus movement response to patch conditions. We showed that local and neighboring densities of conspecifics affect the movement rates of individuals but that the density-dependent effect can be either positive or negative depending on the relative densities of conspecifics in local and neighboring patches. We demonstrated that sex ratio also influences the movement of C. sordidus, that is, the weevil exhibits nonfixed sex-biased movement strategies. Sex-biased movement may be the consequence of intrasexual competition for resources (i.e., oviposition sites) in females and for mates in males. We also detected a high individual variability in the propensity to move. Finally, we discuss the role of demographic stochasticity, sex-biased movement, and individual heterogeneity in movement on the colonization process.

  4. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  5. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
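
    A sketch of the contrast drawn above, using the familiar die example and a Monte Carlo stand-in for the paper's analytic treatment: classic MaxEnt treats the mean constraint m as exact, whereas regarding the MaxEnt solution as a function of m and drawing m from a Gaussian yields a distribution over the MaxEnt probabilities. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_probs(m):
    """Classic MaxEnt point probabilities p_k proportional to exp(lam*k), mean m."""
    g = lambda lam: (faces * np.exp(lam * faces)).sum() / np.exp(lam * faces).sum() - m
    lam = brentq(g, -5.0, 5.0)
    w = np.exp(lam * faces)
    return w / w.sum()

rng = np.random.default_rng(3)
ms = rng.normal(4.5, 0.1, 1000)              # uncertain constraint value
ps = np.array([maxent_probs(m) for m in ms])
print("mean probabilities:", ps.mean(axis=0).round(4))
print("spread of p6 from constraint uncertainty:", ps[:, 5].std().round(4))
```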

  6. Statistical Decoupling of a Lagrangian Fluid Parcel in Newtonian Cosmology

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Szalay, Alex

    2016-03-01

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  7. STATISTICAL DECOUPLING OF A LAGRANGIAN FLUID PARCEL IN NEWTONIAN COSMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Szalay, Alex, E-mail: xwang@cita.utoronto.ca

    The Lagrangian dynamics of a single fluid element within a self-gravitational matter field is intrinsically non-local due to the presence of the tidal force. This complicates the theoretical investigation of the nonlinear evolution of various cosmic objects, e.g., dark matter halos, in the context of Lagrangian fluid dynamics, since fluid parcels with given initial density and shape may evolve differently depending on their environments. In this paper, we provide a statistical solution that could decouple this environmental dependence. After deriving the evolution equation for the probability distribution of the matter field, our method produces a set of closed ordinary differential equations whose solution is uniquely determined by the initial condition of the fluid element. Mathematically, it corresponds to the projected characteristic curve of the transport equation of the density-weighted probability density function (ρPDF). Consequently it is guaranteed that the one-point ρPDF would be preserved by evolving these local, yet nonlinear, curves with the same set of initial data as the real system. Physically, these trajectories describe the mean evolution averaged over all environments by substituting the tidal tensor with its conditional average. For Gaussian distributed dynamical variables, this mean tidal tensor is simply proportional to the velocity shear tensor, and the dynamical system would recover the prediction of the Zel’dovich approximation (ZA) with the further assumption of the linearized continuity equation. For a weakly non-Gaussian field, the averaged tidal tensor could be expanded perturbatively as a function of all relevant dynamical variables whose coefficients are determined by the statistics of the field.

  8. Quantum-shutter approach to tunneling time scales with wave packets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Norifumi; Garcia-Calderon, Gaston; Villavicencio, Jorge

    2005-07-15

    The quantum-shutter approach to tunneling time scales [G. Garcia-Calderon and A. Rubio, Phys. Rev. A 55, 3361 (1997)], which uses a cutoff plane wave as the initial condition, is extended to consider a certain type of wave packet initial condition. An analytical expression for the time-evolved wave function is derived. The time-domain resonance, the peaked structure of the probability density (as a function of time) at the exit of the barrier, originally found with the cutoff plane wave initial condition, is studied with the wave packet initial conditions. It is found that the time-domain resonance is not very sensitive to the width of the packet when the transmission process occurs in the tunneling regime.

  9. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle, and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density in Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle, and it can be readily applied to fitting experimental data.
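
    A sketch of the expansion step (with an illustrative density, not the paper's Fokker-Planck solution): represent a density in x = cos(θ) on [-1, 1] by Legendre coefficients c_l = (2l+1)/2 ∫ P_l(x) ρ(x) dx, then reconstruct from the truncated series.

```python
import numpy as np
from numpy.polynomial import legendre as leg

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
rho = np.exp(-8.0 * (x - 0.9) ** 2)          # density peaked near one pole
rho /= rho.sum() * dx

lmax = 12
eye = np.eye(lmax + 1)                       # eye[l] selects P_l in legval
coef = np.array([(2 * l + 1) / 2.0 * (leg.legval(x, eye[l]) * rho).sum() * dx
                 for l in range(lmax + 1)])

rho_rec = leg.legval(x, coef)                # truncated reconstruction
print("max pointwise error:", np.abs(rho_rec - rho).max())
```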

  10. Reconstructing the deadly eruptive events of 1790 CE at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Swanson, Don; Weaver, Samantha J; Houghton, Bruce F.

    2014-01-01

    A large number of people died during an explosive eruption of Kīlauea Volcano in 1790 CE. Detailed study of the upper part of the Keanakāko‘i Tephra has identified the deposits that may have been responsible for the deaths. Three successive units record shifts in eruption style that agree well with accounts of the eruption based on survivor interviews 46 yr later. First, a wet fall of very fine, accretionary-lapilli–bearing ash created a “cloud of darkness.” People walked across the soft deposit, leaving footprints as evidence. While the ash was still unconsolidated, lithic lapilli fell into it from a high eruption column that was seen from 90 km away. Either just after this tephra fall or during its latest stage, pulsing dilute pyroclastic density currents, probably products of a phreatic eruption, swept across the western flank of Kīlauea, embedding lapilli in the muddy ash and crossing the trail along which the footprints occur. The pyroclastic density currents were most likely responsible for the fatalities, as judged from the reported condition and probable location of the bodies. This reconstruction is relevant today, as similar eruptions will probably occur in the future at Kīlauea and represent its most dangerous and least predictable hazard.

  11. Estimating Lion Abundance using N-mixture Models for Social Species

    PubMed Central

    Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.

    2016-01-01

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the direction of the corresponding effects was undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283

  12. Estimating Lion Abundance using N-mixture Models for Social Species.

    PubMed

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the direction of the corresponding effects was undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.

  13. A hidden Markov model approach to neuron firing patterns.

    PubMed Central

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-01-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581
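
    A minimal numpy sketch of the model class described above: each hidden state emits interspike intervals from its own density (Gaussians here for brevity; real interval densities are positive-valued). The scaled forward recursion computes the likelihood that fitting would maximize. All parameter values are illustrative, not the locus coeruleus recordings.

```python
import numpy as np
from scipy.stats import norm

A = np.array([[0.9, 0.1],                    # state transition matrix
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])                    # initial state probabilities
means = np.array([0.05, 0.20])               # mean interval per state (s)
sds = np.array([0.01, 0.05])

def log_likelihood(obs):
    """Scaled forward algorithm; returns log P(obs | model)."""
    alpha = pi * norm.pdf(obs[0], means, sds)
    c = alpha.sum(); alpha = alpha / c
    ll = np.log(c)
    for y in obs[1:]:
        alpha = (alpha @ A) * norm.pdf(y, means, sds)
        c = alpha.sum(); alpha = alpha / c
        ll += np.log(c)
    return ll

isi = np.array([0.04, 0.06, 0.22, 0.18, 0.05])   # observed intervals (s)
print("log-likelihood:", log_likelihood(isi))
```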

  14. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation.

  15. Implementation and Research on the Operational Use of the Mesoscale Prediction Model COAMPS in Poland

    DTIC Science & Technology

    2007-09-30

    COAMPS model. Bogumil Jakubiak, University of Warsaw, participated in the EGU General Assembly, Vienna, Austria, 15-20 April 2007, giving one oral and two... conditional forecast (background) error probability density function using an ensemble of the model forecasts to generate background error statistics... COAMPS system on ICM machines at Warsaw University for the purpose of providing operational support to the general public using the ICM meteorological

  16. Technical Report 1205: A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-07-08

    The Lanchester combat model is a simple way to assess the effects of quantity and quality... model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned... the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a

  17. Entomologic considerations in the study of onchocerciasis transmission.

    PubMed

    Vargas, L; Díaz-Nájera, A

    1980-01-01

    The entomological resources utilized for a better understanding of Onchocerca volvulus transmission are discussed in this paper. Vector density, anthropophilia, gonotrophic cycle, parous condition, longevity, and the probability of survival in days after the infectious meal are assessed here in order to integrate an overall picture. The concept of vectorial capacity is developed, stressing the quantitative aspects. Parasitism of the blackflies by filariae that are doubtfully identified as O. volvulus is also mentioned here.

  18. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  19. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
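
    A hedged sketch of the surveillance scheme described in this patent family (the shift size M and error-rate thresholds below are illustrative choices, not the patented procedure): fit a normal pdf to training residuals, then run a sequential probability ratio test (SPRT) that flags a persistent mean shift.

```python
import numpy as np

rng = np.random.default_rng(4)
train = rng.normal(0.0, 1.0, 500)            # residuals, normal operation
mu, sd = train.mean(), train.std()           # fitted normal pdf parameters

M = sd                                       # fault hypothesis: mean + M
upper, lower = np.log(99.0), np.log(1.0 / 99.0)   # thresholds, ~1% error rates

llr = 0.0
for r in rng.normal(0.8, 1.0, 200):          # monitored residuals, drifted
    llr += (M * (r - mu) - M**2 / 2.0) / sd**2    # Gaussian log-LR increment
    if llr >= upper:
        print("alarm: fault hypothesis accepted")
        break
    if llr <= lower:
        llr = 0.0                            # accept normal, restart the test
```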

  20. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  1. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  2. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models for the patterns, as well as the cumulative gain probability and probability density functions, of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network.

  3. Photochemical escape of oxygen from Mars: constraints from MAVEN in situ measurements

    NASA Astrophysics Data System (ADS)

    Lillis, R. J.; Deighan, J.; Fox, J. L.; Bougher, S. W.; Lee, Y.; Cravens, T.; Rahmati, A.; Mahaffy, P. R.; Andersson, L.; Combi, M. R.; Benna, M.; Jakosky, B. M.; Gröller, H.

    2016-12-01

    One of the primary goals of the MAVEN mission is to characterize rates of atmospheric escape from Mars at the present epoch and relate those escape rates to solar drivers. Photochemical escape of oxygen is expected to be a significant channel for atmospheric loss, particularly in the early solar system when extreme ultraviolet (EUV) fluxes were much higher. We use near-periapsis (<400 km altitude) data from three instruments. The Langmuir Probe and Waves (LPW) instrument measures electron density and temperature, the Suprathermal And Thermal Ion Composition (STATIC) experiment measures ion temperature, and the Neutral Gas and Ion Mass Spectrometer (NGIMS) measures neutral and ion densities. For each profile of in situ measurements, we make a series of calculations, each as a function of altitude. The first uses electron and ion temperatures to calculate the probability distribution for initial energies of hot O atoms. The second calculates the probability that a hot atom born at that altitude will escape. The third calculates the production rate of the hot O atoms. We then multiply together the profiles of hot atom production and escape probability to get profiles of the production rate of escaping atoms, and integrate with respect to altitude to obtain the escape flux of hot oxygen atoms for that periapsis pass. We will present escape fluxes and derived escape rates from the first Mars year of data collected. Total photochemical loss over time is not straightforward to extrapolate from escape fluxes derived under current conditions, because a thicker atmosphere and much higher solar EUV in the past may have changed the dynamics of escape dramatically. In the future, we intend to use 3-D Monte Carlo models of global atmospheric escape, in concert with our in situ and remote measurements, to fully characterize photochemical escape under current conditions and carefully extrapolate back in time using further simulations with new boundary conditions.
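
    A sketch of the per-pass calculation described above: multiply a hot-O production-rate profile by an escape-probability profile and integrate over altitude to get an escape flux. The profiles below are made-up placeholders, not MAVEN data.

```python
import numpy as np

z = np.linspace(150.0, 400.0, 251)             # altitude grid (km)
production = 1e4 * np.exp(-(z - 200.0) ** 2 / (2 * 30.0**2))  # atoms cm^-3 s^-1
escape_prob = 1.0 / (1.0 + np.exp(-(z - 220.0) / 15.0))       # rises with altitude

integrand = production * escape_prob           # escaping atoms cm^-3 s^-1
dz_cm = (z[1] - z[0]) * 1e5                    # km -> cm
flux = integrand.sum() * dz_cm                 # integrate over altitude
print(f"escape flux ~ {flux:.2e} atoms cm^-2 s^-1")
```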

  4. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
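
    A toy illustration of the equal-mixing logic behind such estimators (not the paper's regression procedure, which derives the equilibrium capture probability P(e) as an intercept): once marked termites have mixed evenly, the marked fraction among captures estimates M/N. All counts are illustrative.

```python
M = 500                    # marked termites released
captures = 400             # captures after the equilibrium phase
recaptures = 40            # marked individuals among the captures

p_marked = recaptures / captures      # ~ M / N under equal mixing
N_hat = M / p_marked                  # population size estimate
print("estimated population size:", round(N_hat))   # 5000
```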

  5. Quantifying volcanic hazard at Campi Flegrei caldera (Italy) with uncertainty assessment: 2. Pyroclastic density current invasion maps

    NASA Astrophysics Data System (ADS)

    Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano

    2015-04-01

    Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk of pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in case of eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.

  6. [Dynamics of sap flow density in stems of typical desert shrub Calligonum mongolicum and its responses to environmental variables].

    PubMed

    Xu, Shi-qin; Ji, Xi-bin; Jin, Bo-wen

    2016-02-01

    Independent measurements of sap flow in stems of Calligonum mongolicum and of environmental variables, using commercial sap flow gauges and a micrometeorological monitoring system, respectively, were made to simulate the variation of sap flow density in the middle reaches of the Hexi Corridor, Northwest China, from June to September 2014. The results showed that the diurnal course of sap flow density in C. mongolicum followed a broad unimodal pattern; the maximum sap flow density was reached about 30 minutes after the maximum of photosynthetically active radiation (PAR), and about 120 minutes before the maxima of temperature and vapor pressure deficit (VPD). During the study period, sap flow density was closely related to the atmospheric evaporative demand and was mainly affected by PAR, temperature, and VPD. A model was developed that directly links sap flow density to climatic variables, and good correlation between measured and simulated sap flow density was observed under different climate conditions. The accuracy of the simulation was significantly improved when the time-lag effect was taken into consideration, although the model underestimated low and nighttime sap flow densities, probably because of plant physiological characteristics.

  7. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right-angle or sighting distances. The probability of observing a point given its right-angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right-angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right-angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0|r).
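
    A short sketch of the nonparametric idea for right-angle distances: with g(0) = 1, animal density follows from the standard line-transect identity D = n f(0) / (2L), where f is the pdf of observed perpendicular distances and L the transect length; the data below are simulated, and the reflected kernel estimator is just one common nonparametric choice.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
y = np.abs(rng.normal(0.0, 20.0, size=120))   # simulated right-angle distances (m)
L = 10_000.0                                  # transect length (m)

# reflect the data about zero so the kernel estimate is unbiased at y = 0;
# the symmetrized density at 0 is f(0)/2, hence the factor of 2 below
kde = gaussian_kde(np.concatenate([y, -y]))
f0 = 2.0 * kde(0.0)[0]

D = len(y) * f0 / (2.0 * L)                   # animals per square metre
print(D * 1e6, "per km^2")
```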

  8. Ecohydrology of agroecosystems: probabilistic description of yield reduction risk under limited water availability

    NASA Astrophysics Data System (ADS)

    Vico, Giulia; Porporato, Amilcare

    2013-04-01

    Supplemental irrigation represents one of the main strategies to mitigate the effects of climate variability and stabilize yields. Irrigated agriculture currently provides 40% of food production and its relevance is expected to further increase in the near future, in face of the projected alterations of rainfall patterns and increase in food, fiber, and biofuel demand. Because of the significant investments and water requirements involved in irrigation, strategic choices are needed to preserve productivity and profitability, while maintaining a sustainable water management - a nontrivial task given the unpredictability of the rainfall forcing. To facilitate decision making under uncertainty, a widely applicable probabilistic framework is proposed. The occurrence of rainfall events and irrigation applications are linked probabilistically to crop development during the growing season and yields at harvest. Based on these linkages, the probability density function of yields and corresponding probability density function of required irrigation volumes, as well as the probability density function of yields under the most common case of limited water availability are obtained analytically, as a function of irrigation strategy, climate, soil and crop parameters. The full probabilistic description of the frequency of occurrence of yields and water requirements is a crucial tool for decision making under uncertainty, e.g., via expected utility analysis. Furthermore, the knowledge of the probability density function of yield allows us to quantify the yield reduction hydrologic risk. Two risk indices are defined and quantified: the long-term risk index, suitable for long-term irrigation strategy assessment and investment planning, and the real-time risk index, providing a rigorous probabilistic quantification of the emergence of drought conditions during a single growing season in an agricultural setting. Our approach employs relatively few parameters and is thus easily and broadly applicable to different crops and sites, under current and future climate scenarios. Hence, the proposed probabilistic framework provides a quantitative tool to assess the impact of irrigation strategy and water allocation on the risk of not meeting a certain target yield, thus guiding the optimal allocation of water resources for human and environmental needs.
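
    The paper obtains these pdfs analytically; the sketch below only mimics its outputs by Monte Carlo, with an invented marked-Poisson rainfall model, a simple irrigation rule, and an assumed water-yield curve, to show how a yield pdf and a long-term risk index can be read off an ensemble of growing seasons.

```python
import numpy as np

rng = np.random.default_rng(11)
seasons, days = 20_000, 120
rain_freq, mean_depth = 0.25, 8.0         # illustrative climate (events/day, mm)
target_yield, irrigation_cap = 0.8, 150.0 # illustrative management (mm cap)

# marked Poisson rainfall: wet days with exponentially distributed depths
rain = rng.binomial(1, rain_freq, (seasons, days)) \
     * rng.exponential(mean_depth, (seasons, days))
water = rain.sum(axis=1)
# supplemental irrigation tops water up toward 600 mm, limited by availability
water += np.minimum(np.maximum(600.0 - water, 0.0), irrigation_cap)

yields = 1.0 - np.exp(-water / 300.0)     # assumed saturating water-yield curve
risk = np.mean(yields < target_yield)     # long-term risk of missing the target
print(yields.mean(), risk)
```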

  9. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also employs a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs with the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data of this kind. Using this technique, we show that higher-order moments can be estimated accurately from a limited sample size, which has been a persistent concern in atmospheric turbulence research. Using robust goodness-of-fit (GoF) metrics, we quantitatively assess the fit of the distributions to the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdfs. Using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing scale-dependent qualities under various stability and flow conditions. This approach can characterize increment fields with only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for future model development. Applications including wind energy and optical wave propagation can benefit from this methodology.
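
    A hedged sketch of the MLE fitting step on surrogate data: scipy's normal inverse Gaussian implementation stands in for whatever fitting code the study used, and real velocity or temperature increments would replace the synthetic sample.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# surrogate heavy-tailed "increment" sample (parameters are illustrative)
du = stats.norminvgauss.rvs(a=1.5, b=0.2, loc=0.0, scale=1.0,
                            size=2000, random_state=rng)

params = stats.norminvgauss.fit(du)                  # MLE of (a, b, loc, scale)
ks = stats.kstest(du, 'norminvgauss', args=params)   # a goodness-of-fit check
skew, kurt = stats.norminvgauss.stats(*params, moments='sk')
print(params, ks.statistic, skew, kurt)
```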

  10. Finite entanglement entropy and spectral dimension in quantum gravity

    NASA Astrophysics Data System (ADS)

    Arzano, Michele; Calcagni, Gianluca

    2017-12-01

    What are the conditions on a field theoretic model leading to a finite entanglement entropy density? We prove two very general results: (1) Ultraviolet finiteness of a theory does not guarantee finiteness of the entropy density; (2) If the spectral dimension of the spatial boundary across which the entropy is calculated is non-negative at all scales, then the entanglement entropy cannot be finite. These conclusions, which we verify in several examples, negatively affect all quantum-gravity models, since their spectral dimension is always positive. Possible ways out are considered, including abandoning the definition of the entanglement entropy in terms of the boundary return probability or admitting an analytic continuation (not a regularization) of the usual definition. In the second case, one can get a finite entanglement entropy density in multi-fractional theories and causal dynamical triangulations.

  11. Anisotropic electrical conduction and reduction in dangling-bond density for polycrystalline Si films prepared by catalytic chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Niikura, Chisato; Masuda, Atsushi; Matsumura, Hideki

    1999-07-01

    Polycrystalline Si (poly-Si) films with a high crystalline fraction and low dangling-bond density were prepared by catalytic chemical vapor deposition (Cat-CVD), often called hot-wire CVD. Directional anisotropy in electrical conduction, probably due to structural anisotropy, was observed for Cat-CVD poly-Si films. A novel method to separately characterize the crystalline and amorphous phases in poly-Si films using this anisotropic electrical conduction is proposed. On the basis of results obtained by the proposed method and electron spin resonance measurements, the dangling-bond density of Cat-CVD poly-Si films was reduced by choosing deposition conditions that raise the quality of the included amorphous phase. The properties of Cat-CVD poly-Si films are found to be promising for solar-cell applications.

  12. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. Under experimental conditions, these parameters must be estimated from small samples, and the resulting estimation errors hinder real-time, precise monitoring of sEMG PDF shape. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics appear more robust than HOS parameters to small-sample-size effects and more accurate for sEMG PDF shape screening applications.

  13. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  14. Analyzing time-ordered event data with missed observations.

    PubMed

    Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P

    2017-09-01

    A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed intervals between events at multiples of the true inter-event time, a pattern that conveys information on the detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance of missing arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated values. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and to spurious rate differences between sites introduced by differences in observational conditions. These results show that the derived methodology can effectively remove observational biases in time-ordered event data.
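
    A sketch of the interval mixture implied by missed detections, under the simplifying assumption of gamma-distributed true inter-event times: if each event is missed independently with probability p, an observed interval spans k missed events with probability (1 - p) p^k and is the sum of k + 1 true intervals. Parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def observed_interval_pdf(t, shape, scale, p, kmax=20):
    # geometric mixture over the number of consecutively missed events;
    # a sum of k+1 gamma(shape, scale) intervals is gamma((k+1)*shape, scale)
    pdf = np.zeros_like(t, dtype=float)
    for k in range(kmax + 1):
        pdf += (1 - p) * p**k * stats.gamma.pdf(t, a=shape * (k + 1),
                                                scale=scale)
    return pdf

t = np.linspace(0.01, 30, 400)
f = observed_interval_pdf(t, shape=8.0, scale=0.5, p=0.3)
print(t[np.argmax(f)])   # modes sit near multiples of the true mean interval
```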

  15. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  16. On the origin of heavy-tail statistics in equations of the Nonlinear Schrödinger type

    NASA Astrophysics Data System (ADS)

    Onorato, Miguel; Proment, Davide; El, Gennady; Randoux, Stephane; Suret, Pierre

    2016-09-01

    We study the formation of extreme events in incoherent systems described by the Nonlinear Schrödinger type of equations. We consider an exact identity that relates the evolution of the normalized fourth-order moment of the probability density function of the wave envelope to the rate of change of the width of the Fourier spectrum of the wave field. We show that, given an initial condition characterized by some distribution of the wave envelope, an increase of the spectral bandwidth in the focusing/defocusing regime leads to an increase/decrease of the probability of formation of rogue waves. Extensive numerical simulations in 1D+1 and 2D+1 are also performed to confirm the results.

  17. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomaly, is developed in order to remove anomalous data from the enormous amount of monitoring data collected. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters, whose posterior PDF (probability density function) conditional on the learning data set is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  18. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of the numerical experiments. (laser applications and other topics in quantum electronics)

  19. Permeability structure and its influence on microbial activity at off-Shimokita basin, Japan

    NASA Astrophysics Data System (ADS)

    Tanikawa, W.; Yamada, Y.; Sanada, Y.; Kubo, Y.; Inagaki, F.

    2016-12-01

    The microbial populations and the limits of microbial life are probably constrained by chemical, physical, and geological conditions, such as temperature, pore water chemistry, pH, and water activity; however, the key parameters affecting growth in deep subseafloor sediments remain unclarified (Hinrichs and Inagaki 2012). IODP Expedition 337 was conducted near a continental margin basin off the Shimokita Peninsula, Japan, to investigate microbial activity in deep marine coalbed sediments down to 2500 mbsf. Inagaki et al. (2015) discovered that microbial abundance decreased markedly with depth (the lowest cell density of <1 cell/cm3 was recorded below 2000 mbsf), and that the coal bed layers had relatively higher cell densities. In this study, permeability was measured on core samples from IODP Expedition 337 and Expedition CK06-06 of the D/V Chikyu shakedown cruise. Permeability was measured at in-situ effective pressure conditions and calculated by the steady-state flow method, keeping the differential pore pressure between 0.1 and 0.8 MPa. Our results show that the permeability of core samples decreases with depth, from 10^-16 m2 at the seafloor to 10^-20 m2 at the bottom of the hole. However, permeability is highly scattered within the coal bed unit (1900 to 2000 mbsf). Permeabilities for sandstone and coal are higher than those for siltstone and shale; the scatter of permeabilities within the same unit is therefore due to the high variation in lithology. The highest permeability was observed in coal samples, probably due to the formation of micro cracks (cleats). Permeability estimated from NMR logging using empirical parameters is around two orders of magnitude higher than the permeability of core samples, even though the relative vertical variation in permeability is quite similar between core and logging data. Higher cell densities are observed in the relatively permeable formations. On the other hand, the correlation between cell density, water activity, and porosity is not clear. On the assumption that the pressure gradient is constant with depth, flow rate is proportional to the permeability of the sediments. Flow rate probably restricts the availability of energy and nutrients for microorganisms; therefore, permeability might have influenced microbial activity in the coalbed basin.

  20. Dissipative Particle Dynamics at Isothermal, Isobaric Conditions Using Shardlow-Like Splitting Algorithms

    DTIC Science & Technology

    2013-09-01


  1. Bayesian inference based on stationary Fokker-Planck sampling.

    PubMed

    Berrones, Arturo

    2010-06-01

    A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm to arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which converges to the full joint posterior. Using the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
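
    A minimal Gibbs sampler for a bivariate Gaussian target, to make the conditional-sampling idea concrete; per the abstract, SFP's contribution is constructing approximate conditionals when, unlike here, they are not known in closed form.

```python
import numpy as np

rng = np.random.default_rng(4)
rho = 0.8                        # correlation of the target joint density
x = y = 0.0
samples = np.empty((10_000, 2))

for i in range(samples.shape[0]):
    # each full conditional of a bivariate standard normal is itself normal
    x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))
    samples[i] = x, y

print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
```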

  2. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.

  3. Probability density of aperture-averaged irradiance fluctuations for long range free space optical communication links.

    PubMed

    Lyke, Stephen D; Voelz, David G; Roggemann, Michael C

    2009-11-20

    The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagation through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. In moderate scintillation, the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius ρ0 and lognormal for aperture sizes on the order of ρ0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
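
    For reference, a sketch of the gamma-gamma irradiance pdf the abstract compares against; the scintillation parameters alpha and beta below are illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.special import kv, gammaln

def gamma_gamma_pdf(I, alpha, beta):
    """p(I) = 2(ab)^((a+b)/2)/(G(a)G(b)) * I^((a+b)/2 - 1) * K_(a-b)(2 sqrt(ab I))."""
    lg = np.log(2.0) + ((alpha + beta) / 2) * np.log(alpha * beta) \
         - gammaln(alpha) - gammaln(beta)
    return np.exp(lg) * I**((alpha + beta) / 2 - 1) \
        * kv(alpha - beta, 2 * np.sqrt(alpha * beta * I))

I = np.linspace(0.01, 4, 400)             # normalized irradiance, mean 1
p = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)
print((p * (I[1] - I[0])).sum())          # integral should be close to 1
```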

  4. Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.

    PubMed

    Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe

    2013-04-01

    Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or over long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solving of the linearized Euler equations, to quantitatively reproduce the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study in which weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available for the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.

  5. Synthesis and analysis of discriminators under influence of broadband non-Gaussian noise

    NASA Astrophysics Data System (ADS)

    Artyushenko, V. M.; Volovach, V. I.

    2018-01-01

    We consider the synthesis and analysis of discriminators when the useful signal is exposed to non-Gaussian additive broadband noise. It is shown that in this case the discriminator of the tracking meter should contain a nonlinear transformation unit whose characteristics are determined by the Fisher information of the probability density function of the mixture of non-Gaussian broadband noise and mismatch errors. The parameters of the discriminatory and phase characteristics of discriminators working under the above conditions are obtained. It is shown that the efficiency of nonlinear processing depends on the ratio of the power of FM noise to the power of Gaussian noise. An analysis of the information loss caused by the linear section of the discriminatory characteristic of the discriminator's nonlinear transformation unit is carried out. It is shown that the average slope of the nonlinear transformation characteristic is determined by the Fisher information of the probability density function of the mixture of non-Gaussian noise and mismatch errors.
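
    A numerical sketch of the synthesis rule: for additive noise with pdf f, the optimal instantaneous nonlinearity is g(x) = -f'(x)/f(x) and the attainable gain is set by the Fisher information of f; the Gaussian-mixture noise pdf below is an assumption for illustration.

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def gauss(x, s):
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# assumed noise pdf: mostly narrow Gaussian with heavy-tailed contamination
f = 0.9 * gauss(x, 1.0) + 0.1 * gauss(x, 4.0)
fp = np.gradient(f, dx)

g = -fp / f                        # optimal instantaneous nonlinearity
fisher = np.sum(fp**2 / f) * dx    # Fisher information of the noise pdf

# for non-Gaussian noise the Fisher information exceeds 1/variance, which is
# the gain nonlinear processing offers over a purely linear discriminator
var = np.sum(x**2 * f) * dx
print(fisher, 1.0 / var)
```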

  6. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., life-detection radar systems) can be used to locate survivors trapped in debris in disaster rescue, or to predict breathing motion to allow beam delivery under free-breathing conditions in external-beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is the most accurate, but it is subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. This defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived model by fitting it to a real-life RRS in the least-squares sense; the evaluation shows that our model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.

  7. The effect of magnetic field on RbCl quantum pseudodot qubit

    NASA Astrophysics Data System (ADS)

    Xiao, Jing-Lin

    2015-07-01

    Under the condition of strong electron-LO-phonon coupling in a RbCl quantum pseudodot (QPD) with an applied magnetic field (MF), the eigenenergies and eigenfunctions of the ground and first excited states (GFES) are obtained using a variational method of the Pekar type (VMPT). A single qubit can be realized in this two-level quantum system. The electron's probability density oscillates in the RbCl QPD with a period of T0 = 7.933 fs when the electron is in a superposition of the GFES. The results indicate that, due to the asymmetrical structure in the z direction of the RbCl QPD, the electron's probability density shows a double-peak configuration, whereas there is only one peak when the confinement is symmetric, as in the x and y directions of the RbCl QPD. The oscillation period is an increasing function of the cyclotron frequency and the polaron radius, and a decreasing function of the chemical potential of the two-dimensional electron gas and the zero point of the pseudoharmonic potential (PP).

  8. A generalized electron energy probability function for inductively coupled plasmas under conditions of nonlocal electron kinetics

    NASA Astrophysics Data System (ADS)

    Mouchtouris, S.; Kokkoris, G.

    2018-01-01

    A generalized equation for the electron energy probability function (EEPF) of inductively coupled Ar plasmas is proposed under conditions of nonlocal electron kinetics and diffusive cooling. The proposed equation describes the local EEPF in a discharge, with the kinetic energy of the electrons as the independent variable. The EEPF consists of a bulk and a depleted tail part and incorporates the effects of the plasma potential, Vp, and pressure. Due to diffusive cooling, the break point of the EEPF is eVp. The pressure alters the shape of the bulk and the slope of the tail part. The parameters of the proposed EEPF are extracted by fitting to measured EEPFs (at one point in the reactor) at different pressures. By coupling the proposed EEPF with a hybrid plasma model, measurements in the gaseous electronics conference reference reactor concerning (a) the electron density and temperature and the plasma potential, either spatially resolved or at different pressures (10-50 mTorr) and powers, and (b) the ion current density at the electrode, are well reproduced. The effect of the choice of EEPF on the results is investigated by comparison to an EEPF derived from the Boltzmann equation (local electron kinetics approach) and to a Maxwellian EEPF. The accuracy of the results, and the fact that the proposed EEPF is predefined, render its use a reliable, low-cost alternative to stochastic electron kinetic models at low-pressure conditions, one that can be extended to other gases and/or different electron heating mechanisms.

  9. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
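
    A sketch of the two scoring criteria on a single made-up Gaussian-mixture redshift pdf; the network itself is omitted, and the mixture parameters and "true" redshift are illustrative.

```python
import numpy as np
from scipy import stats

weights = np.array([0.6, 0.4])            # hypothetical GMM output of the net
means = np.array([0.80, 1.30])
sigmas = np.array([0.10, 0.25])
z_true = 0.95                             # hypothetical spectroscopic redshift

z = np.linspace(0.0, 3.0, 3000)
cdf = np.sum(weights * stats.norm.cdf(z[:, None], means, sigmas), axis=1)

pit = np.sum(weights * stats.norm.cdf(z_true, means, sigmas))   # PIT value
crps = np.sum((cdf - (z >= z_true))**2) * (z[1] - z[0])         # CRPS integral
print(pit, crps)
```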

  10. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  11. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  12. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
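
    A toy Poisson illustration of the hit-probability argument: the expected number of hits per cell nucleus scales with the local deposition density, so a hot spot with a large enhancement factor behaves very differently from uniform activity; all numbers are illustrative.

```python
from scipy import stats

mean_hits_uniform = 0.05          # mean hits per nucleus, uniform activity
enhancement = 100.0               # illustrative hot-spot enhancement factor

for lam, label in [(mean_hits_uniform, "uniform"),
                   (mean_hits_uniform * enhancement, "hot spot")]:
    p_hit = 1 - stats.poisson.pmf(0, lam)      # P(at least one hit)
    p_multi = 1 - stats.poisson.cdf(1, lam)    # P(two or more hits)
    print(f"{label}: P(hit) = {p_hit:.3f}, P(multiple) = {p_multi:.3f}")

# for small lam, P(hit) ~ lam, i.e. linear in dose; at the hot spot nearly
# every cell is hit more than once and the dose response is no longer linear
```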

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carson, K.S.

    The presence of overpopulation or unsustainable population growth may place pressure on the food and water supplies of countries in sensitive areas of the world. Severe air or water pollution may place additional pressure on these resources. These pressures may generate both internal and international conflict in these areas as nations struggle to provide for their citizens. Such conflicts may result in United States intervention, either unilaterally or through the United Nations. It is therefore in the interests of the United States to identify potential areas of conflict in order to properly train and allocate forces. The purpose of this research is to forecast the probability of conflict in a nation as a function of its environmental conditions. Probit, logit, and ordered probit models are employed to forecast the probability of a given level of conflict. Data from 95 countries are used to estimate the models, and probability forecasts are generated for these 95 nations. Out-of-sample forecasts are generated for an additional 22 nations. These probabilities are then used to rank nations from highest probability of conflict to lowest. The results indicate that the dependence of a nation's economy on agriculture, the rate of deforestation, and the population density are important variables in forecasting the probability and level of conflict, suggesting that environmental variables do play a role in generating or exacerbating conflict. It is unclear whether the United States military has any direct role in mitigating the environmental conditions that may generate conflict. A more important role for the military is to aid in data gathering to generate better forecasts, so that troops are adequately prepared when conflict arises.
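
    A hedged sketch of the kind of logit specification described, on synthetic data; the variable names mirror the abstract (agricultural share, deforestation rate, population density), but the values and coefficients are simulated, not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 95
X = np.column_stack([
    rng.uniform(0.0, 0.6, n),     # agriculture share of the economy
    rng.uniform(0.0, 3.0, n),     # deforestation rate (%/yr)
    rng.lognormal(4.0, 1.0, n),   # population density
])
true_logit = -3 + 4 * X[:, 0] + 0.8 * X[:, 1] + 0.004 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))   # simulated conflict flag

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
risk = model.predict(sm.add_constant(X))
ranking = np.argsort(-risk)       # nations ranked from highest risk to lowest
print(model.params, ranking[:5])
```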

  14. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.

  15. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663
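
    A minimal sketch of test (i), the count comparison: under a fully specified forecast the number of events in the test window is Poisson with the forecast rate, and the observed count is scored by its tail probabilities. The rate and count below are hypothetical.

```python
from scipy import stats

forecast_rate = 8.2   # expected event count from a hypothetical forecast
observed = 14         # events actually recorded in the test window

p_too_many = 1 - stats.poisson.cdf(observed - 1, forecast_rate)
p_too_few = stats.poisson.cdf(observed, forecast_rate)
print(p_too_many, p_too_few)   # small tail probabilities reject the forecast
```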

  16. Background Noise Characteristics in the Western Part of Romania

    NASA Astrophysics Data System (ADS)

    Grecu, B.; Neagoe, C.; Tataru, D.; Stuart, G.

    2012-04-01

    The seismological database for the western part of Romania has grown significantly in recent years: 33 broadband seismic stations provided by SEIS-UK (10 CMG 40T's - 30 s, 9 CMG 3T's - 120 s, 14 CMG 6T's - 30 s) were deployed there in July 2009 to operate autonomously for two years. These stations were installed within a joint project (South Carpathian Project - SCP) between the University of Leeds, UK, and the National Institute for Earth Physics (NIEP), Romania, aimed at determining the lithospheric structure and geodynamical evolution of the South Carpathian Orogen. The characteristics of the background seismic noise recorded by the SCP broadband network have been studied in order to identify variations in background seismic noise as a function of time of day, season, and particular conditions at the stations. Power spectral densities (PSDs) and their corresponding probability density functions (PDFs) are used to characterize the background seismic noise. At high frequencies (> 1 Hz), seismic noise appears to be of cultural origin, since notable variations between daytime and nighttime noise levels are observed at most stations. Seasonal variations are seen in the microseism band: noise levels increase during the winter and autumn months and decrease in summer and spring, while the double-frequency peak shifts from shorter periods in summer to longer periods in winter. Analysis of the probability density functions for stations located in different geological settings shows that the noise level is higher for stations sited on softer formations than for those sited on hard rock. Finally, polarization analysis indicates that the main sources of secondary microseisms lie in the Mediterranean Sea and the Atlantic Ocean.
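
    A sketch of the PSD/PDF construction on surrogate noise: Welch spectra are computed per segment and the empirical distribution of PSD values is built per frequency bin; real station data would normally be read and preprocessed with a seismological package such as ObsPy.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(6)
fs = 100.0                                    # sampling rate (Hz)
record = rng.normal(size=int(fs) * 3600)      # one hour of surrogate noise

segments = record.reshape(60, -1)             # sixty 1-minute segments
freqs, _ = signal.welch(segments[0], fs=fs, nperseg=1024)
psds = np.array([signal.welch(s, fs=fs, nperseg=1024)[1] for s in segments])

db = 10 * np.log10(psds)                      # PSD per segment, in dB
i = np.argmin(np.abs(freqs - 1.0))            # pick the ~1 Hz bin
pdf, edges = np.histogram(db[:, i], bins=20, density=True)
print(freqs[i], pdf.max())                    # empirical PDF of noise power
```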

  17. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive thresholds have not been addressed well in the previous density-based algorithm FDBSCAN and the hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN in the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, and direct reachability probability, thus reducing the complexity and solving the issue of the nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify PDBSCAN to an improved version (PDBSCANi) by using a better cluster assignment strategy to ensure that every object is assigned to the most appropriate cluster, thus solving the issue of the nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulty clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
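
    A naive Monte Carlo version of the core quantity these algorithms need, the probability that two uncertain objects lie within eps of each other; the abstract notes that PDBSCAN replaces such sampling with a more accurate computation, so this is only the baseline idea.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_within(mu_a, mu_b, sigma, eps, n=20_000):
    # sample both uncertain objects from Gaussian error clouds and count
    # how often their realized distance is within eps
    a = rng.normal(mu_a, sigma, size=(n, 2))
    b = rng.normal(mu_b, sigma, size=(n, 2))
    return np.mean(np.linalg.norm(a - b, axis=1) <= eps)

print(p_within(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
               sigma=0.3, eps=1.0))
```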

  18. The quotient of normal random variables and application to asset price fat tails

    NASA Astrophysics Data System (ADS)

    Caginalp, Carey; Caginalp, Gunduz

    2018-06-01

    The quotient of random variables with normal distributions is examined and proven to have power-law decay, with density f(x) ≃ f0 x^(-2), with the coefficient depending on the means and variances of the numerator and denominator and their correlation. We also obtain the conditional probability densities for each of the four quadrants given by the signs of the numerator and denominator for arbitrary correlation ρ ∈ [-1, 1). For ρ = -1 we obtain a particularly simple closed-form solution for all x ∈ R. The results are applied to a basic issue in economics and finance, namely the density of relative price changes. Classical finance stipulates a normal distribution of relative price changes, though empirical studies suggest a power law at the tail end. By considering the supply and demand in a basic price change model, we prove that the relative price change has density that decays with an x^(-2) power law. Various parameter limits are established.
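
    A quick simulation check of the x^(-2) tail: since a density ~ f0 x^(-2) implies a survival function ~ f0/x, the product x * P(Q > x) should level off at roughly f0; the means, variances, and correlation below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
rho = -0.5
cov = [[1.0, rho], [rho, 1.0]]
num, den = rng.multivariate_normal([0.5, 1.0], cov, size=1_000_000).T
q = num / den                       # quotient of correlated normals

for x in (10, 20, 40, 80):
    # density ~ f0 x^(-2) implies x * P(Q > x) ~ f0, roughly constant
    print(x, x * np.mean(q > x))
```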

  19. Roosevelt elk density in old-growth forests of Olympic National Park

    USGS Publications Warehouse

    Houston, D.B.; Moorhead, Bruce B.; Olson, R.W.

    1987-01-01

    We explored the feasibility of censusing Roosevelt elk from a helicopter in the dense old-growth forests of Olympic National Park, WA. Mean observed densities ranged from 8.0 to 11.6 elk/km2, with coefficients of variation averaging 19.9 percent. A provisional sightability factor of 74 percent suggested that actual mean densities ranged from 10.8 to 16.0 elk/km2. We conclude that estimates of elk density could probably be refined, but not without a cost and level of disturbance in the park that seem unwarranted at present. The effort required to conduct the 18 counts made during 1985-86 was substantial: for almost every successful count an unsuccessful attempt was made, including five flights aborted when counting conditions deteriorated. Actual counting time for the successful flights was 15.7 and 16.2 hours in 1985 and 1986, respectively. Additional flight time for traveling to and from the census zones, refueling, and aborted attempts added 12.2 and 10.8 hours in the respective years.

  20. The ability of individuals to assess population density influences the evolution of emigration propensity and dispersal distance.

    PubMed

    Poethke, Hans Joachim; Gros, Andreas; Hovestadt, Thomas

    2011-08-07

    We analyze the simultaneous evolution of emigration and settlement decisions for actively dispersing species differing in their ability to assess population density. Using an individual-based model we simulate dispersal as a multi-step (patch to patch) movement in a world consisting of habitat patches surrounded by a hostile matrix. Each such step is associated with the same mortality risk. Our simulations show that individuals following an informed strategy, where emigration (and settlement) probability depends on local population density, evolve a lower (natal) emigration propensity but disperse over significantly larger distances - i.e. postpone settlement longer - than individuals performing density-independent emigration. This holds especially when variation in environmental conditions is spatially correlated. Both effects can be traced to the informed individuals' ability to better exploit existing heterogeneity in reproductive chances. Yet, already moderate distance-dependent dispersal costs prevent the evolution of multi-step (long-distance) dispersal, irrespective of the dispersal strategy. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  2. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.

  3. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
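
    A compact sketch of the two-step method on a 2D grid: color white Gaussian noise in the Fourier domain with the square root of a target spectrum, then map through the Gaussian CDF and the inverse CDF of the target amplitude distribution (an exponential here, as one example). The memoryless transform slightly distorts the spectrum, which is consistent with the authors' description of the method as an engineering approach.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 256
white = rng.normal(size=(n, n))

kx = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
k[0, 0] = kx[1]                                  # avoid division by zero
spectrum = k**-2.0                               # target power spectral density

# step 1: color the white noise with the square root of the spectrum
colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(spectrum)).real
colored /= colored.std()

# step 2: transform the Gaussian marginals to the target distribution
u = stats.norm.cdf(colored)
field = stats.expon.ppf(u, scale=1.0)            # target amplitude pdf
print(field.mean(), field.std())
```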

  4. Process, System, Causality, and Quantum Mechanics: A Psychoanalysis of Animal Faith

    NASA Astrophysics Data System (ADS)

    Etter, Tom; Noyes, H. Pierre

    We shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution J on a set of random variables containing x and y, define a link between x and y to be the condition x=y on J. Define the state D of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule, and the unitary dynamical law whose best known form is the Schrödinger equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PD) and D'T = TD respectively, where P is a projection, D and D' are (von Neumann) density matrices, and T is a unitary transformation. We'll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all D's are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ, and also how they can coexist in natural harmony with each other, as they must in quantum measurement, which we'll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.

  5. Whole-brain grey matter density predicts balance stability irrespective of age and protects older adults from falling.

    PubMed

    Boisgontier, Matthieu P; Cheval, Boris; van Ruitenbeek, Peter; Levin, Oron; Renaud, Olivier; Chanal, Julien; Swinnen, Stephan P

    2016-03-01

    Functional and structural imaging studies have demonstrated the involvement of the brain in balance control. Nevertheless, how decisive grey matter density and white matter microstructural organisation are in predicting balance stability, especially in relation to ageing, remains unclear. Standing balance was tested on a platform moving at different frequencies and amplitudes in 30 young and 30 older adults, with eyes open and with eyes closed. Centre of pressure variance was used as an indicator of balance instability. The mean density of grey matter and mean white matter microstructural organisation were measured using voxel-based morphometry and diffusion tensor imaging, respectively. Mixed-effects models were built to analyse the extent to which age, grey matter density, and white matter microstructural organisation predicted balance instability. Results showed that both grey matter density and age independently predicted balance instability. These predictions were reinforced as the level of difficulty of the conditions increased. Furthermore, grey matter predicted balance instability beyond age and at least as consistently as age across conditions. In other words, for balance stability, the level of whole-brain grey matter density is at least as decisive as being young or old. Finally, brain grey matter appeared to be protective against falls in older adults, as age increased the probability of losing balance in older adults with low, but not moderate or high, grey matter density. No such results were observed for white matter microstructural organisation, thereby reinforcing the specificity of our grey matter findings. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian

    2009-07-15

    Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing in the probability density function used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCMχ. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented in the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCMχ model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCMχ model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations. Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCMχ reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape. (author)

  7. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a

  8. Excited atoms in the free-burning Ar arc: treatment of the resonance radiation

    NASA Astrophysics Data System (ADS)

    Golubovskii, Yu; Kalanov, D.; Gortschakow, S.; Baeva, M.; Uhrlandt, D.

    2016-11-01

    The collisional-radiative model with an emphasis on the accurate treatment of the resonance radiation transport is developed and applied to the free-burning Ar arc plasma. This model allows for analysis of the influence of resonance radiation on the spatial density profiles of the atoms in different excited states. The comparison of the radial density profiles obtained using an effective transition probability approximation with the results of the accurate solution demonstrates the distinct impact of transport on the profiles and absolute densities of the excited atoms, especially in the arc fringes. The departures from the Saha-Boltzmann equilibrium distributions, caused by different radiative transitions, are analyzed. For the case of the DC arc, the local thermodynamic equilibrium (LTE) state holds close to the arc axis, while strong deviations from the equilibrium state on the periphery occur. In the intermediate radial positions the conditions of partial LTE are fulfilled.

  9. Application of the coral health chart to determine bleaching status of Acropora downingi in a subtropical coral reef

    NASA Astrophysics Data System (ADS)

    Oladi, Mahshid; Shokri, Mohammad Reza; Rajabi-Maham, Hassan

    2017-06-01

    The 'Coral Health Chart' has become a popular tool for monitoring coral bleaching worldwide. The scleractinian coral Acropora downingi (Wallace 1999) is highly vulnerable to temperature anomalies in the Persian Gulf. Our study tested the reliability of Coral Health Chart scores for the assessment of bleaching-related changes in the mitotic index (MI) and density of zooxanthellae cells in A. downingi in Qeshm Island, the Persian Gulf. The results revealed that, at least under severe conditions, it can be used as an effective proxy for detecting changes in the density of normal, transparent, or degraded zooxanthellae and MI. However, its ability to discern changes in pigment concentration and total zooxanthellae density should be viewed with some caution in the Gulf region, probably because the high levels of environmental variability in this region result in inherent variations in the characteristics of zooxanthellae among "healthy" looking corals.

  10. The role of the density gradient on intermittent cross-field transport events in a simple magnetized toroidal plasma

    NASA Astrophysics Data System (ADS)

    Theiler, C.; Diallo, A.; Fasoli, A.; Furno, I.; Labit, B.; Podestà, M.; Poli, F. M.; Ricci, P.

    2008-04-01

    Intermittent cross-field particle transport events (ITEs) are studied in the basic toroidal device TORPEX [TORoidal Plasma EXperiment, A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], with focus on the role of the density gradient. ITEs are due to the intermittent radial elongation of an interchange mode. The elongating positive wave crests can break apart and form blobs. This is not necessary, however, for plasma particles to be convected a considerable distance across the magnetic field lines. Conditionally sampled data reveal two different scenarios leading to ITEs. In the first case, the interchange mode grows radially from a slab-like density profile and leads to the ITE. A novel analysis technique reveals a monotonic dependence between the vertically averaged inverse radial density scale length and the probability of a subsequent ITE. In the second case, the mode is already observed before the start of the ITE. It does not elongate radially at first, but does so at a later time. It is shown that this elongation is preceded by a steepening of the density profile as well.

  11. Environmental and Cultural Impact. Proposed Tennessee Colony Reservoir, Trinity River, Texas. Volume II. Appendices A, B and C.

    DTIC Science & Technology

    1972-01-01

    in a considerable increase in the water salinity. Density stratification of the water is possible, the denser saline water forming the lower water... layer in the lake basin which causes anaerobic, toxic conditions. Salinity also causes an increase in flocculation and sedimentation of clay minerals... increased salinity of the lake water. Excessive water seepage in the fault zone intersecting the northernmost part of the reservoir will probably

  12. Modeling of turbulent chemical reaction

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.

    1995-01-01

    Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.

  13. SEMICONDUCTOR PHYSICS: Properties of the two- and three-dimensional quantum dot qubit

    NASA Astrophysics Data System (ADS)

    Shihua, Chen

    2010-05-01

    Under the condition of strong electron-longitudinal-optical (LO) phonon coupling in both two- and three-dimensional parabolic quantum dots (QDs), we obtain the eigenenergies and eigenfunctions of the ground state (GS) and the first excited state (ES) by using a variational method of the Pekar type. This system in a QD may be employed as a quantum bit (qubit). When the electron is in a superposition of the GS and the first ES, we obtain the time evolution of the electron density. The dependence of both the electron probability density and the period of oscillation on the electron-LO phonon coupling strength and the confinement length is discussed.

  14. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0|r).
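
    The estimation problem can be made concrete: for right-angle distances, the standard line-transect estimator takes the form D = n * f(0) / (2L), where f is the probability density of the observed perpendicular distances and g(0) = 1 guarantees f(0) is estimable. The following is a minimal sketch, not the authors' method, using a kernel density estimate of f(0) with reflection at the boundary; the half-normal example data are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def line_transect_density(y, L):
    """Nonparametric line-transect estimate D = n * f(0) / (2 * L), where
    f is the probability density of perpendicular distances y >= 0 and
    L is the total transect length (same length unit as y)."""
    y = np.asarray(y, dtype=float)
    # Reflect the sample about zero so the kernel estimate is not biased
    # low at the boundary; the factor of 2 undoes the reflection.
    kde = gaussian_kde(np.concatenate([y, -y]))
    f0 = 2.0 * kde(0.0)[0]
    return y.size * f0 / (2.0 * L)

# Hypothetical survey: 40 detections with half-normal fall-off (sigma = 25 m)
# along 10 km of transect line.
rng = np.random.default_rng(1)
print(line_transect_density(np.abs(rng.normal(0.0, 25.0, 40)), 10_000.0))
```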

  15. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  16. Landuse legacies and small streams: Identifying relationships between historical land use and contemporary stream conditions

    USGS Publications Warehouse

    Maloney, K.O.; Feminella, J.W.; Mitchell, R.M.; Miller, S.A.; Mulholland, P.J.; Houser, J.N.

    2008-01-01

    The concept of landscape legacies has been examined extensively in terrestrial ecosystems and has led to a greater understanding of contemporary ecosystem processes. However, although stream ecosystems are tightly coupled with their catchments and, thus, probably are affected strongly by historical catchment conditions, few studies have directly examined the importance of landuse legacies on streams. We examined relationships between historical land use (1944) and contemporary (2000-2003) stream physical, chemical, and biological conditions after accounting for the influences of contemporary land use (1999) and natural landscape (catchment size) variation in 12 small streams at Fort Benning, Georgia, USA. Most stream variables showed strong relationships with contemporary land use and catchment size; however, after accounting for these factors, residual variation in many variables remained significantly related to historical land use. Residual variation in benthic particulate organic matter, diatom density, % of diatoms in Eunotia spp., fish density in runs, and whole-stream gross primary productivity correlated negatively, whereas streamwater pH correlated positively, with residual variation in fraction of disturbed land in catchments in 1944 (i.e., bare ground and unpaved road cover). Residual variation in % recovering land (i.e., early successional vegetation) in 1944 was correlated positively with residual variation in streambed instability, a macroinvertebrate biotic index, and fish richness, but correlated negatively with residual variation in most benthic macroinvertebrate metrics examined (e.g., Chironomidae and total richness, Shannon diversity). In contrast, residual variation in whole-stream respiration rates was not explained by historical land use. Our results suggest that historical land use continues to influence important physical and chemical variables in these streams, and in turn, probably influences associated biota. Beyond providing insight into biotic interactions and their associations with environmental conditions, identification of landuse legacies also will improve understanding of stream impairment in contemporary minimally disturbed catchments, enabling more accurate assessment of reference conditions in studies of biotic integrity and restoration. © 2008 by The North American Benthological Society.

  17. Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?

    USGS Publications Warehouse

    Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.

    2005-01-01

    In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
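
    A minimal simulation in the spirit of the comparison, though not the authors' model: a stochastic Ricker population with and without a density-dependent epidemic process, scored by the fraction of trajectories crossing a quasi-extinction threshold. All parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def quasi_extinction_prob(epidemics=False, n0=100.0, r=0.5, K=100.0,
                          threshold=10.0, years=50, reps=2000):
    """Fraction of stochastic Ricker trajectories that fall below a
    quasi-extinction threshold. If `epidemics` is True, an epidemic whose
    arrival probability rises with density removes a random fraction of
    hosts, so both the chance and the effect of disease are
    density dependent."""
    hits = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            n *= np.exp(r * (1.0 - n / K) + rng.normal(0.0, 0.2))
            if epidemics and rng.random() < n / (n + K):  # denser -> likelier
                n *= rng.uniform(0.3, 0.9)                # epidemic mortality
            if n < threshold:
                hits += 1
                break
    return hits / reps

print("self-limitation only:", quasi_extinction_prob())
print("with epidemics      :", quasi_extinction_prob(epidemics=True))
```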

  18. Stochastic characteristics and Second Law violations of atomic fluids in Couette flow

    NASA Astrophysics Data System (ADS)

    Raghavan, Bharath V.; Karimi, Pouyan; Ostoja-Starzewski, Martin

    2018-04-01

    Using Non-equilibrium Molecular Dynamics (NEMD) simulations, we study the statistical properties of an atomic fluid undergoing planar Couette flow, in which particles interact via a Lennard-Jones potential. We draw a connection between local density contrast and temporal fluctuations in the shear stress, which arise naturally through the equivalence between the dissipation function and entropy production according to the fluctuation theorem. We focus on the shear stress and the spatio-temporal density fluctuations and study the autocorrelations and spectral densities of the shear stress. The bispectral density of the shear stress is used to measure the degree of departure from a Gaussian model and the degree of nonlinearity induced in the system owing to the applied strain rate. More evidence is provided by the probability density function of the shear stress. We use information theory to account for the departure from Gaussian statistics and to develop a more general probability distribution function that captures this broad range of effects. By accounting for negative shear stress increments, we show how this distribution preserves the violations of the Second Law of Thermodynamics observed in planar Couette flow of atomic fluids, and also how it captures the non-Gaussian nature of the system by allowing for non-zero higher moments. We also demonstrate how the temperature affects the bandwidth of the shear stress and how the density affects its power spectral density, thus determining the conditions under which the shear stress acts as a narrow-band or wide-band random process. We show that changes in the statistical characteristics of the parameters of interest occur at a critical strain rate at which an ordering transition occurs in the fluid, causing shear thinning and affecting its stability. A critical strain rate of this kind is also predicted by the Loose-Hess stability criterion.

  19. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.

  20. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames, and the agreement is good.

  1. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
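
    The two auxiliary densities make the formalism easy to simulate. A minimal sketch follows, assuming, purely for illustration, exponential pausing times and Gaussian jump magnitudes rather than the densities fitted to the data in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def ctrw_sample(t_final, mean_wait=1.0, jump_scale=0.01, reps=5000):
    """Sample price changes at time t_final from a continuous-time random
    walk: waiting times and jump magnitudes are drawn from the two
    auxiliary densities (exponential waits and Gaussian jumps here)."""
    out = np.empty(reps)
    for i in range(reps):
        t, x = 0.0, 0.0
        while True:
            t += rng.exponential(mean_wait)   # pausing time between jumps
            if t > t_final:
                break
            x += rng.normal(0.0, jump_scale)  # jump magnitude
        out[i] = x
    return out

changes = ctrw_sample(t_final=100.0)
print("mean:", changes.mean(), " std:", changes.std())
```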

  2. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
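
    The direct calculation of conditional PDFs from a joint KDE reduces to a ratio of densities, p(y|x) = p(x,y)/p(x). A minimal sketch using SciPy's generic Gaussian KDE as a stand-in for fastKDE (the fastKDE package has its own interface, which is not reproduced here):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + 0.6 * rng.normal(size=5000)     # correlated pair

joint = gaussian_kde(np.vstack([x, y]))       # estimate of p(x, y)
marg = gaussian_kde(x)                        # estimate of p(x)

def conditional_pdf(y_grid, x0):
    """p(y | x = x0) = p(x0, y) / p(x0)."""
    pts = np.vstack([np.full_like(y_grid, x0), y_grid])
    return joint(pts) / marg(x0)[0]

y_grid = np.linspace(-4.0, 4.0, 201)
pdf = conditional_pdf(y_grid, x0=1.0)
print("normalisation:", pdf.sum() * (y_grid[1] - y_grid[0]))  # close to 1
```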

  3. The Independent Effects of Phonotactic Probability and Neighbourhood Density on Lexical Acquisition by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Lee, Su-Yeon

    2011-01-01

    The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…

  4. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  5. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
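
    The paper's self-consistent scheme is not reproduced here, but the combination of importance sampling with a multiplicative update factor is well illustrated by the closely related Wang-Landau flat-histogram method. A minimal sketch for a toy system (a periodic 1D Ising chain standing in for a potential energy landscape; all control parameters are hypothetical):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def wang_landau_ising(n=12, ln_f_final=1e-4, steps_per_check=5000):
    """Flat-histogram estimate of ln g(E) for a periodic 1D Ising chain.
    A multiplicative update factor f (tracked as ln_f) is annealed toward
    1 each time the energy histogram is roughly flat."""
    s = rng.choice([-1, 1], n)
    E = -int(np.sum(s * np.roll(s, 1)))
    ln_g = defaultdict(float)                 # running log density of states
    hist = defaultdict(int)
    ln_f = 1.0                                # start with f = e
    while ln_f > ln_f_final:
        for _ in range(steps_per_check):
            i = rng.integers(n)
            dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % n])
            # Accept with min(1, g(E)/g(E + dE)) to flatten the histogram.
            if np.log(rng.random()) < ln_g[E] - ln_g[E + dE]:
                s[i] = -s[i]
                E += dE
            ln_g[E] += ln_f
            hist[E] += 1
        counts = np.array(list(hist.values()))
        if counts.min() > 0.8 * counts.mean():
            ln_f /= 2.0                       # anneal the update factor
            hist.clear()
    return dict(ln_g)

ln_g = wang_landau_ising()
print("visited energy levels:", sorted(int(e) for e in ln_g))
```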

  6. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Despite serving the same purpose, satellite products differ in value because of their inherent uncertainties. Moreover, such products have been generated over long periods and are numerous and varied, so efforts to reduce uncertainty and to handle very large data volumes are needed. In this paper, we create an ensemble Sea Surface Temperature (SST) product using MODIS Aqua, MODIS Terra, and COMS (Communication Ocean and Meteorological Satellite), with Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize a conditional probability density function (PDF) using posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques based on the Hadoop framework will be adopted for more efficient computation of very large satellite datasets.
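
    The weighted-average construction of the BMA predictive PDF can be written compactly. A minimal sketch, assuming Gaussian member kernels and hypothetical weights (in practice the weights and spread would come from EM training against matchup observations):

```python
import numpy as np

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density: a weighted mixture of Gaussian kernels
    centred on the member forecasts. Weights are the posterior model
    probabilities; all numbers below are hypothetical."""
    x = np.asarray(x, dtype=float)[..., None]
    kernels = np.exp(-0.5 * ((x - forecasts) / sigma) ** 2) \
              / (sigma * np.sqrt(2.0 * np.pi))
    return kernels @ weights

# Three members standing in for MODIS Aqua, MODIS Terra, and COMS.
x = np.linspace(14.0, 20.0, 600)
pdf = bma_pdf(x, forecasts=np.array([16.2, 16.8, 17.4]),
              weights=np.array([0.5, 0.3, 0.2]), sigma=0.4)
dx = x[1] - x[0]
print("BMA mean SST:", (x * pdf).sum() * dx)   # weighted-average forecast
```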

  7. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
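
    The false-negative logic follows from treating replicate samples as independent detection trials: if a single sample detects with probability p, then n replicates all miss with probability (1 - p)^n. A minimal sketch; the per-sample probability below echoes the 0.18 figure from the abstract for illustration and is not derived from the paper's parameterized models.

```python
def detect_at_least_once(p_single, n_samples):
    """Probability that at least one of n independent replicate eDNA
    samples detects the target species."""
    return 1.0 - (1.0 - p_single) ** n_samples

# A per-sample detection probability of 0.18 (roughly one fish per stream
# kilometre in the abstract) still gives fair odds with replication:
for n in (1, 3, 5, 10):
    print(n, "samples ->", round(detect_at_least_once(0.18, n), 3))
```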

  8. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as the existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform densities, either separated from zero or not. Particular instability results are obtained for a general class of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to experimental data for mouse B-cell lymphoma, yielding mean square errors at comparable levels. For the estimated parameter sets we discuss the possibility of stabilising the tumour-dormant steady state; instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
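
    The linear chain trick mentioned at the end converts an Erlang-distributed delay into a chain of ordinary differential equations, so standard ODE solvers apply. A minimal sketch on a delayed-logistic toy model, not the paper's immune-tumour system; k, a, and r are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

def chain_rhs(t, u, k, a, r):
    """Linear chain trick: an Erlang(k, a)-distributed delay of x is
    realised as k sequential compartments y_1..y_k relaxing at rate a.
    x obeys a delayed-logistic law x' = r * x * (1 - y_k), where y_k
    carries the Erlang-weighted history of x (mean delay k / a)."""
    x, y = u[0], u[1:]
    dy = np.empty(k)
    dy[0] = a * (x - y[0])
    for i in range(1, k):
        dy[i] = a * (y[i - 1] - y[i])
    return np.concatenate([[r * x * (1.0 - y[-1])], dy])

k, a, r = 4, 2.0, 1.5                       # hypothetical parameters
u0 = np.full(k + 1, 0.1)                    # x and the chain start together
sol = solve_ivp(chain_rhs, (0.0, 40.0), u0, args=(k, a, r), max_step=0.05)
print("x(40) =", sol.y[0, -1])
```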

  9. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  10. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  11. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant density during May but partially reduced in August, whereas the trend was reversed in areas of low ant density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant density. Differences in seasonal population distributions of rodents between areas of high and low ant density were probably due to interactions of seed availability, rodent energetics, and predation.

  12. Does red noise increase or decrease extinction risk? Single extreme events versus series of unfavorable conditions.

    PubMed

    Schwager, Monika; Johst, Karin; Jeltsch, Florian

    2006-06-01

    Recent theoretical studies have shown contrasting effects of temporal correlation of environmental fluctuations (red noise) on the risk of population extinction. It is still debated whether and under which conditions red noise increases or decreases extinction risk compared with uncorrelated (white) noise. Here, we explain the opposing effects by introducing two features of red noise time series. On the one hand, positive autocorrelation increases the probability of series of poor environmental conditions, implying increasing extinction risk. On the other hand, for a given time period, the probability of at least one extremely bad year ("catastrophe") is reduced compared with white noise, implying decreasing extinction risk. Which of these two features determines extinction risk depends on the strength of environmental fluctuations and the sensitivity of population dynamics to these fluctuations. If extreme (catastrophic) events can occur (strong noise) or sensitivity is high (overcompensatory density dependence), then temporal correlation decreases extinction risk; otherwise, it increases it. Thus, our results provide a simple explanation for the contrasting previous findings and are a crucial step toward a general understanding of the effect of noise color on extinction risk.
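
    The two competing features are easy to reproduce numerically: an AR(1) process with autocorrelation kappa supplies red noise with the same stationary variance as its white counterpart. A minimal sketch, with hypothetical population parameters, comparing quasi-extinction frequencies under the two noise colours:

```python
import numpy as np

rng = np.random.default_rng(7)

def extinction_freq(kappa, sigma=0.6, r=0.1, K=1.0, years=100, reps=3000):
    """Frequency of quasi-extinction for a logistic-type population forced
    by AR(1) environmental noise with autocorrelation kappa. kappa = 0 is
    white noise; kappa > 0 is red noise with the same stationary variance
    (the sqrt factor rescales the innovations accordingly)."""
    hits = 0
    for _ in range(reps):
        n, eps = 0.5 * K, 0.0
        for _ in range(years):
            eps = kappa * eps + np.sqrt(1 - kappa**2) * rng.normal(0, sigma)
            n *= np.exp(r * (1 - n / K) + eps)
            if n < 0.01 * K:
                hits += 1
                break
    return hits / reps

print("white:", extinction_freq(0.0), " red:", extinction_freq(0.7))
```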

  13. Estimating Density and Temperature Dependence of Juvenile Vital Rates Using a Hidden Markov Model

    PubMed Central

    McElderry, Robert M.

    2017-01-01

    Organisms in the wild have cryptic life stages that are sensitive to changing environmental conditions and can be difficult to survey. In this study, I used mark-recapture methods to repeatedly survey Anaea aidea (Nymphalidae) caterpillars in nature, then modeled caterpillar demography as a hidden Markov process to assess if temporal variability in temperature and density influence the survival and growth of A. aidea over time. Individual encounter histories result from the joint likelihood of being alive and observed in a particular stage, and I have included hidden states by separating demography and observations into parallel and independent processes. I constructed a demographic matrix containing the probabilities of all possible fates for each stage, including hidden states, e.g., eggs and pupae. I observed both dead and live caterpillars with high probability. Peak caterpillar abundance attracted multiple predators, and survival of fifth instars declined as per capita predation rate increased through spring. A time lag between predator and prey abundance was likely the cause of improved fifth instar survival estimated at high density. Growth rates showed an increase with temperature, but the preferred model did not include temperature. This work illustrates how state-space models can include unobservable stages and hidden state processes to evaluate how environmental factors influence vital rates of cryptic life stages in the wild. PMID:28505138
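
    The structure described here, parallel demographic and observation processes, is the standard hidden-Markov factorisation, and an encounter-history likelihood follows from the forward algorithm. A minimal sketch with hypothetical stages, transition rates, and detection probabilities (not the fitted A. aidea values):

```python
import numpy as np

# Stages: egg (hidden), caterpillar (observable), pupa (hidden), dead.
# Rows are current stage; columns are next stage; rows sum to 1.
T = np.array([[0.70, 0.20, 0.00, 0.10],
              [0.00, 0.75, 0.15, 0.10],
              [0.00, 0.00, 0.90, 0.10],
              [0.00, 0.00, 0.00, 1.00]])   # dead is absorbing

# Per-stage detection probabilities: mostly caterpillars (and carcasses)
# are observed, mirroring the high observation rates in the abstract.
p_detect = np.array([0.0, 0.6, 0.0, 0.3])

def encounter_likelihood(history):
    """Forward-algorithm likelihood of a 0/1 detection history for one
    individual that starts as an egg."""
    alpha = np.array([1.0, 0.0, 0.0, 0.0])
    for seen in history:
        alpha = alpha @ T                          # demographic transition
        alpha = alpha * (p_detect if seen else 1.0 - p_detect)  # observation
    return alpha.sum()

print(encounter_likelihood([0, 1, 1, 0, 0]))
```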

  14. Beak condition and cage density determine abundance and spatial distribution of northern fowl mites, Ornithonyssus sylviarum, and chicken body lice, Menacanthus stramineus, on caged laying hens.

    PubMed

    Mullens, B A; Chen, B L; Owen, J P

    2010-12-01

    Adult White Leghorn hens (Hy-Line strain W-36) were inoculated with either northern fowl mites or chicken body lice, and the ectoparasite populations were monitored over periods of 9 to 16 wk. Two beak conditions (beak trimmed or beak intact) and 2 housing densities (1 or 2 hens per 25 × 31 cm suspended wire cage) were tested. Populations of both ectoparasites were at least 10 times lower on beak-intact hens compared with populations on beak-trimmed hens. Cage density did not influence mite numbers, but higher numbers of lice (2 to 3 times) developed on hens held at the higher cage density. Louse distribution on the body and louse population age structure were also influenced by host beak condition. Beak-intact hens had a higher proportion of lice under the wings, whereas beak-trimmed hens had the majority of lice on the lower abdomen. Louse populations on beak-trimmed hens also comprised relatively more immature stages than populations found on beak-intact hens. The effects are likely related to decreased grooming efficiency by beak-trimmed hens and, in the case of lice, the higher host density. The high mite and louse populations on most commercial caged laying hens are probably a direct result of beak trimming. However, selection of more docile breeds that can be held without trimming may allow the hens themselves to reduce ectoparasites below economically damaging levels. This could benefit producers, animal welfare advocates, and human health by reducing 1) costs of beak trimming, 2) pesticide treatment costs (including human and bird chemical exposure concerns), and 3) objections to beak trimming from the animal welfare community.

  15. Flight and ground tests of a very low density elastomeric ablative material

    NASA Technical Reports Server (NTRS)

    Olsen, G. C.; Chapman, A. J., III

    1972-01-01

    A very low density ablative material, a silicone-phenolic composite, was flight tested on a recoverable spacecraft launched by a Pacemaker vehicle system; and, in addition, it was tested in an arc heated wind tunnel at three conditions which encompassed most of the reentry heating conditions of the flight tests. The material was composed, by weight, of 71 percent phenolic spheres, 22.8 percent silicone resin, 2.2 percent catalyst, and 4 percent silica fibers. The tests were conducted to evaluate the ablator performance in both arc tunnel and flight tests and to determine the predictability of the ablator performance by using computed results from an existing one-dimensional numerical analysis. The flight tested ablator experienced only moderate surface recession and retained a smooth surface except for isolated areas where the char was completely removed, probably following reentry and prior to or during recovery. Analytical results show good agreement between arc tunnel and flight test results. The thermophysical properties used in the analysis are tabulated.

  16. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  17. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  18. Concepts and Bounded Rationality: An Application of Niestegge's Approach to Conditional Quantum Probabilities

    NASA Astrophysics Data System (ADS)

    Blutner, Reinhard

    2009-03-01

    Recently, Gerd Niestegge developed a new approach to quantum mechanics via conditional probabilities, developing the well-known proposal to consider the Lüders-von Neumann measurement as a non-classical extension of probability conditionalization. I will apply his powerful and rigorous approach to the treatment of concepts using a geometrical model of meaning. In this model, instances are treated as vectors of a Hilbert space H. In the present approach there are at least two possibilities to form categories. The first possibility sees categories as a mixture of their instances (described by a density matrix). In the simplest case we get the classical probability theory including the Bayesian formula. The second possibility sees categories formed by a distinctive prototype which is the superposition of the (weighted) instances. The construction of prototypes can be seen as transferring a mixed quantum state into a pure quantum state, freezing the probabilistic characteristics of the superposed instances into the structure of the formed prototype. Closely related to the idea of forming concepts by prototypes is the existence of interference effects. Such interference effects are typically found in macroscopic quantum systems and I will discuss them in connection with several puzzles of bounded rationality. The present approach nicely generalizes earlier proposals made by authors such as Diederik Aerts, Andrei Khrennikov, Ricardo Franco, and Jerome Busemeyer. Concluding, I will suggest that an active dialogue between cognitive approaches to logic and semantics and the modern approach of quantum information science is mandatory.
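
    The two ways of forming categories can be made concrete with a few lines of linear algebra: a mixture is a density matrix, a prototype is a normalised superposition, and the two assign different probabilities to the same test instance, exposing the interference terms. A minimal sketch in a toy two-dimensional Hilbert space (vectors and weights hypothetical):

```python
import numpy as np

# Two normalised instance vectors in a toy 2-D Hilbert space.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0]) / np.sqrt(2)
w = np.array([0.5, 0.5])                      # instance weights

# Category as a mixture: density matrix rho = sum_i w_i |v_i><v_i|.
rho = w[0] * np.outer(v1, v1) + w[1] * np.outer(v2, v2)

# Category as a prototype: normalised superposition of the instances,
# i.e. a pure state that freezes the weights into its structure.
p = w[0] * v1 + w[1] * v2
p /= np.linalg.norm(p)
rho_proto = np.outer(p, p)

# The probability assigned to a test instance differs between the two:
test = np.array([0.0, 1.0])
print("mixture  :", test @ rho @ test)        # classical weighted average
print("prototype:", test @ rho_proto @ test)  # includes interference terms
```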

  19. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
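
    Under this account the information carried by a word is its surprisal, -log2 of its contextual probability, so inserting "that" spreads the information of an otherwise surprising clause onset over more of the signal. A minimal sketch with hypothetical probabilities (not values estimated from the paper's corpus):

```python
import math

def surprisal(p):
    """Information carried by an event of probability p, in bits."""
    return -math.log2(p)

# Hypothetical contextual probabilities of the complement-clause onset,
# with and without the complementizer "that":
p_onset_without_that = 0.05   # boundary must be inferred: surprising
p_onset_with_that = 0.60      # "that" signals the boundary explicitly

print(surprisal(p_onset_without_that))  # ~4.3 bits packed into one word
print(surprisal(p_onset_with_that))     # ~0.7 bits: information spread out
```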

  20. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
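
    The ensemble is simple to sample, which makes the asymptotic results easy to check numerically: draw two independent random bipartite pure states, partially trace each, and examine the spectrum of the difference. A minimal sketch (dimensions and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mixed_state(n, m):
    """Partially trace a random n x m bipartite pure state down to n dims."""
    psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T                # rho_A = Tr_B |psi><psi|

n = m = 8
spectra = np.array([np.linalg.eigvalsh(random_mixed_state(n, m)
                                       - random_mixed_state(n, m))
                    for _ in range(2000)])

# Trace distance is half the sum of the absolute eigenvalues of the
# difference matrix; the paper derives its typical asymptotic value.
print("mean trace distance:", 0.5 * np.abs(spectra).sum(axis=1).mean())
print("second spectral moment:", (spectra ** 2).mean())
```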

  1. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
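
    The three listed benefits translate directly into a fitting-and-selection workflow: fit several candidate probability density functions by maximum likelihood and compare them with an information criterion. The paper's appendix provides R code; below is a minimal Python sketch of the same idea, with simulated stand-in depth data and AIC for model selection.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
depth = rng.gamma(shape=4.0, scale=0.25, size=300)   # stand-in field data

# Candidate probability density functions for the suitability curve.
candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "weibull": stats.weibull_min}

best = None
for name, dist in candidates.items():
    params = dist.fit(depth, floc=0)          # maximum likelihood, loc fixed
    k = len(params) - 1                       # free parameters (loc is fixed)
    aic = 2 * k - 2 * dist.logpdf(depth, *params).sum()
    print(f"{name:8s} AIC = {aic:8.1f}")
    if best is None or aic < best[0]:
        best = (aic, name)

print("selected model:", best[1])
```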

  2. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  3. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the Modified Poschl Teller potential. Graphs of the wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, expectation values of quantum mechanical operators can be calculated analytically in integral form or from the probability density graphs produced by the program.
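
    The final step, computing the probability density and expectation values, can be reproduced directly for the standard modified Poschl-Teller well V(x) = -(a^2/2) * lam * (lam + 1) * sech^2(a x) (units hbar = m = 1), whose ground state is psi_0(x) proportional to sech^lam(a x) with E_0 = -a^2 lam^2 / 2. A minimal numerical sketch (in Python rather than Delphi; the well parameters are hypothetical):

```python
import numpy as np
from scipy.integrate import trapezoid

a, lam = 1.0, 2.0                              # hypothetical well parameters

x = np.linspace(-8.0, 8.0, 4001)
psi = 1.0 / np.cosh(a * x) ** lam              # unnormalised ground state
psi /= np.sqrt(trapezoid(psi**2, x))           # normalise numerically

density = psi**2                               # probability density |psi|^2
print("norm check:", trapezoid(density, x))    # ~1.0
print("E_0 =", -0.5 * a**2 * lam**2)           # ground-state energy
print("<x^2> =", trapezoid(x**2 * density, x)) # an expectation value
```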

  4. Functional response and population dynamics for fighting predator, based on activity distribution.

    PubMed

    Garay, József; Varga, Zoltán; Gámez, Manuel; Cabello, Tomás

    2015-03-07

    The classical Holling type II functional response, describing the per capita predation as a function of prey density, was modified by Beddington and de Angelis to include interference of predators, which increases with predator density and decreases the number of killed prey. In the present paper we further generalize the Beddington-de Angelis functional response, considering that all predator activities (searching for and handling prey, fighting, and recovery) have a time duration, and that the probabilities of predator activities depend on the encounter probabilities, and hence on prey and predator abundance, too. Under these conditions, the aim of the study is to introduce a functional response for a fighting predator and to analyse the corresponding dynamics when predator-predator-prey encounters also occur. From this general approach, the Holling-type functional responses can also be obtained as particular cases. In terms of the activity distribution, we give biologically interpretable sufficient conditions for stable coexistence. We consider two-individual (predator-prey) and three-individual (predator-predator-prey) encounters. In the three-individual encounter model there is a relatively higher fighting rate and a lower killing rate. Using numerical simulation, we surprisingly found that when the intrinsic prey growth rate and the conversion rate are small enough, the equilibrium predator abundance is higher in the three-individual encounter case. This means that, when the equilibrium abundance of the predator is small, coexistence appears first in the three-individual encounter model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Non-Linear Interactions between Consumers and Flow Determine the Probability of Plant Community Dominance on Maine Rocky Shores

    PubMed Central

    Silliman, Brian R.; McCoy, Michael W.; Trussell, Geoffrey C.; Crain, Caitlin M.; Ewanchuk, Patrick J.; Bertness, Mark D.

    2013-01-01

    Although consumers can strongly influence community recovery from disturbance, few studies have explored the effects of consumer identity and density and how they may vary across abiotic gradients. On rocky shores in Maine, recent experiments suggest that recovery of plant- or animal- dominated community states is governed by rates of water movement and consumer pressure. To further elucidate the mechanisms of consumer control, we examined the species-specific and density-dependent effects of rocky shore consumers (crabs and snails) on community recovery under both high (mussel dominated) and low flow (plant dominated) conditions. By partitioning the direct impacts of predators (crabs) and grazers (snails) on community recovery across a flow gradient, we found that grazers, but not predators, are likely the primary agent of consumer control and that their impact is highly non-linear. Manipulating snail densities revealed that herbivorous and bull-dozing snails (Littorina littorea) alone can control recovery of high and low flow communities. After ∼1.5 years of recovery, snail density explained a significant amount of the variation in macroalgal coverage at low flow sites and also mussel recovery at high flow sites. These density-dependent grazer effects were both non-linear and flow-dependent, with low abundance thresholds needed to suppress plant community recovery, and much higher levels needed to control mussel bed development. Our study suggests that consumer density and identity are key in regulating both plant and animal community recovery and that physical conditions can determine the functional forms of these consumer effects. PMID:23940510

  6. Shape fabric development in rigid clast populations under pure shear: The influence of no-slip versus slip boundary conditions

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Meere, Patrick A.

    2015-09-01

    Shape fabrics of elliptical objects in rocks are usually assumed to develop by passive behavior of inclusions with respect to the surrounding material, leading to shape-based strain analysis methods belonging to the Rf/ϕ family. A probability density function is derived for the orientational characteristics of populations of rigid ellipses deforming in a pure-shear 2D deformation with both no-slip and slip boundary conditions. Using maximum likelihood, a numerical method is developed for estimating finite strain in natural populations deforming under both mechanisms. Application to a natural example indicates the importance of the slip mechanism in explaining clast shape fabrics in deformed sediments.

  7. Stochastic modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Fox, R. O.; Hill, J. C.; Gao, F.; Moser, R. D.; Rogers, M. M.

    1992-01-01

    Direct numerical simulations of a single-step irreversible chemical reaction with non-premixed reactants in forced isotropic turbulence at R_λ = 63, Da = 4.0, and Sc = 0.7 were made using 128 Fourier modes to obtain joint probability density functions (pdfs) and other statistical information to parameterize and test a Fokker-Planck turbulent mixing model. Preliminary results indicate that the modeled gradient stretching term for an inert scalar is independent of the initial conditions of the scalar field. The conditional pdf of scalar gradient magnitudes is found to be a function of the scalar until the reaction is largely completed. Alignment of concentration gradients with local strain rate and other features of the flow were also investigated.

  8. The Last Interglacial Labrador Sea: A Pervasive Millennial Oscillation In Surface Water Conditions Without Labrador Sea Water Formation

    NASA Astrophysics Data System (ADS)

    Hillaire-Marcel, C.; de Vernal, A.

    A multi-proxy approach was developed to document secular to millennial changes of potential density in surface, mesopelagic, and bottom waters of the Labrador Sea, allowing reconstruction of situations when winter convection with intermediate or deep water formation occurred in the basin. This approach relies on dinocyst transfer functions providing estimates of sea-surface temperature and salinity, which are used to calibrate past relationships between oxygen-18 contents in calcite and potential density gradients. The oxygen isotope compositions of epipelagic (Globigerina bulloides), deeper-dwelling (Neogloboquadrina pachyderma, left coiling), and benthic (Uvigerina peregrina and Cibicides wuellerstorfi) foraminifera then allow extrapolation of density gradients between the corresponding water layers. This approach has been tested in surface sediments in reference to modern hydrographic conditions at several sites from the NW North Atlantic, then used to reconstruct past conditions from high-resolution studies of cores raised from the southern Greenland Rise (off Cape Farewell). Results indicate that the modern-like regime established during the early Holocene and developed fully only after 7 ka. It is marked by weak density gradients between the surface and intermediate water masses, allowing winter convection down to a lower pycnocline between intermediate and deep-water masses, and thus the formation of intermediate Labrador Sea Water (LSW). Contrasting with the middle to late Holocene situation, since the last interglacial and throughout the last climatic cycle, a single dense water mass seems to have occupied the water column below a generally low-density surface water layer, thus preventing deep convection. Therefore, the production of LSW seems to be a feature specific to the present interglacial interval, one that could soon cease to exist due to global warming, as suggested by recent ocean model experiments and by the fact that it never occurred during the last interglacial. We think that the mechanism for the eventual shutdown of LSW formation involves an enhanced freshwater export from the Arctic into the Labrador Sea, as a consequence of both an enhanced hydrological cycle in a warmer mean climate and a lesser sea-ice extent in the Canadian Arctic Archipelago. Both the last interglacial and the Holocene depict large-amplitude millennial oscillations in surface water conditions and in density gradients with the underlying water mass. During the last 11 ka, six of these oscillations are recorded, and those that occurred since ca. 7 ka BP probably resulted in large-amplitude changes in the LSW production rate. These oscillations possibly correspond to the Holocene "pervasive millennial cycle" observed by Bond and others in a few North Atlantic records. We hypothesize that they are related to sea-ice conditions in the Arctic Ocean and to the relative routing of outflowing freshwaters through either the Canadian Arctic Archipelago or Fram Strait into the North Atlantic. These oscillations would probably persist after an eventual collapse of LSW formation, as suggested by the last interglacial reconstructions, but their impact on future thermohaline circulation in the North Atlantic is unclear.

  9. Multi-level biological responses in Ucides cordatus (Linnaeus, 1763) (Brachyura, Ucididae) as indicators of conservation status in mangrove areas from the western Atlantic.

    PubMed

    Duarte, Luis Felipe de Almeida; Souza, Caroline Araújo de; Nobre, Caio Rodrigues; Pereira, Camilo Dias Seabra; Pinheiro, Marcelo Antonio Amaro

    2016-11-01

    There is a global lack of knowledge on tropical ecotoxicology, particularly in terms of mangrove areas. These areas often serve as nurseries or homes for several animal species, including Ucides cordatus (the uçá crab). This species is widely distributed, is part of the diet of human coastal communities, and is considered a sentinel species due to its sensitivity to toxic xenobiotics in natural environments. Sublethal damage to benthic populations reveals pre-pathological conditions, but discussions of its implications are scarce in the literature. In Brazil, the state of São Paulo offers an interesting scenario for ecotoxicology and population studies: it is easy to distinguish between mangroves that are well preserved and those that are significantly impacted by human activity. The objectives of this study were to provide normal baseline values for the frequency of micronucleated cells (MN‰) and for neutral red retention time (NRRT) in U. cordatus at pristine locations, as well as to indicate the conservation status of different mangrove areas using a multi-level biological response approach in which these biomarkers and population indicators (condition factor and crab density) are applied in relation to environmental quality indicators (determined via information in the literature and solid waste volume). Mangrove areas with no impact (reference or pristine areas) presented mean values of MN‰ < 3 and NRRT > 120 min, which were assumed to be baseline values representing genetic and physiological normality. A significant correlation was found between NRRT and MN, with both showing similar and effective results for distinguishing between different mangrove areas according to conservation status. Furthermore, crab density was lower in more impacted mangrove areas, a finding which also reflects the effects of sublethal damage and which was not captured by condition factor measurements. Multi-level biological responses were able to reflect the conservation status of the mangrove areas studied, using guideline values of MN‰, NRRT, and uçá crab density to categorize three levels of human impact in mangrove areas: PNI (probable null impact), PLI (probable low impact), and PHI (probable high impact). The results confirm the usefulness of multi-level biological responses in U. cordatus for diagnosing threats to mangrove areas; this species therefore represents an effective tool in studies of mangrove conservation status in the Western Atlantic. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are poorly fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
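
    To illustrate the kind of update described above, the following Python sketch implements one EM iteration for a mixture of complex Wishart densities over per-pixel sample covariance matrices. This is not the authors' code: the number of looks L, the synthetic data, and the initialization are assumptions, and terms of the Wishart density that are constant across classes are dropped because they cancel in the posterior class probabilities.

        import numpy as np

        def wishart_log_like(C, Sigma, L):
            """Complex-Wishart log-density of sample covariance C for class
            covariance Sigma and L looks, up to class-independent constants."""
            Sinv = np.linalg.inv(Sigma)
            return (-L * np.log(np.abs(np.linalg.det(Sigma)))
                    - L * np.real(np.trace(Sinv @ C)))

        def em_step(covs, Sigmas, priors, L):
            """One EM iteration over a list of pixel covariance matrices."""
            n, K = len(covs), len(Sigmas)
            logp = np.empty((n, K))
            for i, C in enumerate(covs):
                for k in range(K):
                    logp[i, k] = np.log(priors[k]) + wishart_log_like(C, Sigmas[k], L)
            logp -= logp.max(axis=1, keepdims=True)        # E-step, stabilized
            w = np.exp(logp)
            w /= w.sum(axis=1, keepdims=True)              # posterior class probabilities
            new_Sigmas = [sum(w[i, k] * covs[i] for i in range(n)) / w[:, k].sum()
                          for k in range(K)]               # M-step: weighted covariances
            return new_Sigmas, w.mean(axis=0), w           # updated priors from posteriors

        # Tiny demo with synthetic 2x2 Hermitian covariances and K = 2 classes
        rng = np.random.default_rng(0)
        def rand_cov(scale):
            A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            return scale * (A @ A.conj().T) / 2 + 1e-3 * np.eye(2)
        covs = [rand_cov(1.0) for _ in range(50)] + [rand_cov(5.0) for _ in range(50)]
        Sigmas, priors = [rand_cov(1.0), rand_cov(5.0)], np.array([0.5, 0.5])
        for _ in range(10):
            Sigmas, priors, w = em_step(covs, Sigmas, priors, L=4)

    In the paper's pipeline the initial partition would instead come from a GΓD fit to single-polarization intensities.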

  11. Optimum space shuttle launch times relative to natural environment

    NASA Technical Reports Server (NTRS)

    King, R. L.

    1977-01-01

    Three sets of meteorological criteria were analyzed to determine the probabilities of favorable launch and landing conditions. Probabilities were computed for every 3 hours on a yearly basis using 14 years of weather data. These temporal probability distributions, applicable to the three sets of weather criteria encompassing benign, moderate and severe weather conditions, were computed for both Kennedy Space Center (KSC) and Edwards Air Force Base. In addition, conditional probabilities were computed for unfavorable weather conditions occurring after a delay which may or may not be due to weather conditions. Also, for KSC, the probabilities of favorable landing conditions at various times after favorable launch conditions have prevailed were computed, so that mission probabilities may be more accurately computed for those time periods when weather persistence strongly correlates conditions over time. Moreover, the probabilities and conditional probabilities of the occurrence of both favorable and unfavorable events for each individual criterion were computed to indicate the significance of each weather element to the overall result.
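
    As a toy illustration of such conditional probabilities, the Python sketch below computes the unconditional probability of favorable conditions and a 24-hour persistence probability from a synthetic 3-hourly series; a real analysis would substitute the 14 years of archived weather flags for each criterion.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic stand-in for 14 years of 3-hourly "launch criteria met" flags
        favorable = rng.random(14 * 365 * 8) < 0.7

        p_fav = favorable.mean()                 # unconditional probability
        lag = 8                                  # 8 three-hour steps = 24 hours
        joint = (favorable[:-lag] & favorable[lag:]).mean()
        p_cond = joint / p_fav                   # P(favorable at t+24h | favorable at t)
        print(f"P(fav) = {p_fav:.3f}, P(fav@+24h | fav) = {p_cond:.3f}")

    With independent synthetic flags the conditional probability simply reproduces the unconditional one; with real data, the difference between the two is exactly the persistence effect the report quantifies.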

  12. Effect of gas filling pressure and operation energy on ion and neutron emission in a medium energy plasma focus device

    NASA Astrophysics Data System (ADS)

    Niranjan, Ram; Rout, R. K.; Srivastava, Rohit; Kaushik, T. C.

    2018-03-01

    The effects of gas filling pressure and operation energy on deuterium ion and neutron emission have been studied in a medium energy plasma focus device, MEPF-12. The deuterium gas filling pressure was varied from 1 to 10 mbar at an operation energy of 9.7 kJ, and the operation energy was varied from 3.9 to 9.7 kJ at a deuterium gas filling pressure of 4 mbar. Time-resolved emission of deuterium ions was measured using a Faraday cup. Simultaneously, time-integrated and time-resolved neutron emissions were measured using a silver activation detector and a plastic scintillator detector, respectively. Various characteristics of the deuterium ions (fluence, peak density, and most probable energy) were estimated from the Faraday cup signal. The fluence was found to be nearly independent of the gas filling pressure and operation energy, whereas the peak density and most probable energy of the deuterium ions varied. The neutron yield also varied with gas filling pressure and operation energy, and the effect of the ions on neutron emission was observed at each operating condition.

  13. The effects of vent location, event scale and time forecasts on pyroclastic density current hazard maps at Campi Flegrei caldera (Italy)

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano

    2017-09-01

    This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at Campi Flegrei caldera. The method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps representative of PDCs originating inside limited zones of the caldera, or of PDCs with a limited range of scales, are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, also including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years on almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values reduce by a factor of about 3 if the entire eruptive record is considered over the last 15 kyr, i.e. including both eruptive epochs and quiescent periods.

  14. Car accidents induced by a bottleneck

    NASA Astrophysics Data System (ADS)

    Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid

    2017-12-01

    Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents occurring (Pac) at the entrance of the merging part of two roads (i.e., a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, whereas it improves road safety at high densities. The phase diagram of the system is also constructed.
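
    For readers unfamiliar with the underlying model, here is a minimal single-lane Nagel-Schreckenberg update in Python. The junction geometry, the non-cooperative-driver rules, and the accident criterion used to estimate Pac are specific to the paper and are not reproduced here; the road length, density, and braking probability below are illustrative assumptions.

        import numpy as np

        def nasch_step(pos, vel, L_road, vmax, p_brake, rng):
            """One Nagel-Schreckenberg update on a circular road."""
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % L_road   # empty cells ahead
            vel = np.minimum(vel + 1, vmax)                # 1. acceleration
            vel = np.minimum(vel, gaps)                    # 2. brake to avoid collision
            slow = (rng.random(len(vel)) < p_brake) & (vel > 0)
            vel = np.where(slow, vel - 1, vel)             # 3. random slowdown
            pos = (pos + vel) % L_road                     # 4. movement
            return pos, vel

        rng = np.random.default_rng(1)
        L_road, N, vmax = 200, 40, 5
        pos = np.sort(rng.choice(L_road, N, replace=False))
        vel = rng.integers(0, vmax + 1, N)
        for _ in range(1000):
            pos, vel = nasch_step(pos, vel, L_road, vmax, 0.3, rng)
        print("mean speed after relaxation:", vel.mean())

    Accident statistics such as Pac are then typically gathered by counting configurations in which a safety condition (for example, the gap to a still-moving leader being smaller than the follower's speed) is violated.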

  15. Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2005-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163

  16. Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2006-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596

  17. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wampler, William R.; Myers, Samuel M.; Modine, Normand A.

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.

  18. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  19. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  20. Site specific probability of passive acoustic detection of humpback whale calls from single fixed hydrophones.

    PubMed

    Helble, Tyler A; D'Spain, Gerald L; Hildebrand, John A; Campbell, Gregory S; Campbell, Richard L; Heaney, Kevin D

    2013-09-01

    Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. A common mistake in the analysis of marine mammal acoustic data is formulating conclusions about these animals without first understanding how environmental properties such as bathymetry, sediment properties, water column sound speed, and ocean acoustic noise influence the detection and character of vocalizations in the acoustic data. The approach in this paper is to use Monte Carlo simulations with a full wave field acoustic propagation model to characterize the site specific probability of detection of six types of humpback whale calls at three passive acoustic monitoring locations off the California coast. Results show that the probability of detection can vary by factors greater than ten when comparing detections across locations, or comparing detections at the same location over time, due to environmental effects. Effects of uncertainties in the inputs to the propagation model are also quantified, and the model accuracy is assessed by comparing calling statistics amassed from 24,690 humpback units recorded in the month of October 2008. Under certain conditions, the probability of detection can be estimated with uncertainties sufficiently small to allow for accurate density estimates.

  1. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  2. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  3. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  4. ECOLOGICAL DETERMINANTS OF AVIAN INFLUENZA VIRUS, WEST NILE VIRUS, AND AVIAN PARAMYXOVIRUS INFECTION AND ANTIBODY STATUS IN BLUE-WINGED TEAL (ANAS DISCORS) IN THE CANADIAN PRAIRIES.

    PubMed

    Nallar, Rodolfo; Papp, Zsuzsanna; Leighton, Frederick A; Epp, Tasha; Pasick, John; Berhane, Yohannes; Lindsay, Robbin; Soos, Catherine

    2016-01-01

    The Canadian prairies are one of the most important breeding and staging areas for migratory waterfowl in North America. Hundreds of thousands of waterfowl of numerous species from multiple flyways converge in and disperse from this region annually; therefore this region may be a key area for potential intra- and interspecific spread of infectious pathogens among migratory waterfowl in the Americas. Using Blue-winged Teal (Anas discors, BWTE), which have the most extensive migratory range among waterfowl species, we investigated ecologic risk factors for infection and antibody status to avian influenza virus (AIV), West Nile virus (WNV), and avian paramyxovirus-1 (APMV-1) in the three prairie provinces (Alberta, Saskatchewan, and Manitoba) prior to fall migration. We used generalized linear models to examine infection or evidence of exposure in relation to host (age, sex, body condition, exposure to other infections), spatiotemporal (year, province), population-level (local population densities of BWTE, total waterfowl densities), and environmental (local pond densities) factors. The probability of AIV infection in BWTE was associated with host factors (e.g., age and antibody status), population-level factors (e.g., local BWTE population density), and year. An interaction between age and AIV antibody status showed that hatch year birds with antibodies to AIV were more likely to be infected, suggesting an antibody response to an active infection. Infection with AIV was positively associated with local BWTE density, supporting the hypothesis of density-dependent transmission. The presence of antibodies to WNV and APMV-1 was positively associated with age and varied among years. Furthermore, the probability of being WNV antibody positive was positively associated with pond density rather than host population density, likely because ponds provide suitable breeding habitat for mosquitoes, the primary vectors for transmission. Our findings highlight the importance of spatiotemporal, environmental, and host factors at the individual and population levels, all of which may influence dynamics of these and other viruses in wild waterfowl populations.

  5. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization provides an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures and uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet the requirements of minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
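
    The binomial arithmetic behind the point estimate method is easy to reproduce. The short Python sketch below computes the probability of passing a 29-of-29 demonstration (PPD) for several assumed true POD values, and shows why passing implies POD ≥ 0.90 at 95% confidence.

        from scipy.stats import binom

        n = 29
        for pod in (0.85, 0.90, 0.95, 0.98):
            ppd = binom.pmf(n, n, pod)       # all 29 flaws must be detected
            print(f"true POD = {pod:.2f} -> PPD = {ppd:.3f}")

        # If the true POD were only 0.90, the chance of detecting all 29 flaws
        # is 0.9**29 ~= 0.047 < 0.05, so a passed demonstration gives at least
        # 95% confidence that POD >= 0.90 at the demonstrated flaw size.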

  6. Memory effects on a resonate-and-fire neuron model subjected to Ornstein-Uhlenbeck noise

    NASA Astrophysics Data System (ADS)

    Paekivi, S.; Mankin, R.; Rekker, A.

    2017-10-01

    We consider a generalized Langevin equation with an exponentially decaying memory kernel as a model for the firing process of a resonate-and-fire neuron. The effect of temporally correlated random neuronal input is modeled as Ornstein-Uhlenbeck noise. In the noise-induced spiking regime of the neuron, we derive exact analytical formulas for the dependence of some statistical characteristics of the output spike train, such as the probability distribution of the interspike intervals (ISIs) and the survival probability, on the parameters of the input stimulus. Particularly, on the basis of these exact expressions, we have established sufficient conditions for the occurrence of memory-time-induced transitions between unimodal and multimodal structures of the ISI density and a critical damping coefficient which marks a dynamical transition in the behavior of the system.
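
    The Python sketch below simulates a memoryless resonate-and-fire neuron (in its standard complex-variable form) driven by Ornstein-Uhlenbeck noise via Euler-Maruyama integration and collects interspike intervals. Note that the paper's model is a generalized Langevin equation with an exponentially decaying memory kernel, which this sketch deliberately omits; all parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        dt, T = 1e-3, 200.0
        b, w = -0.5, np.pi          # damping and eigenfrequency of the neuron
        tau, sigma = 0.5, 2.0       # OU correlation time and noise strength
        thresh = 0.5                # spike threshold on Im(z)

        z, eta, t, spikes = 0.0 + 0.0j, 0.0, 0.0, []
        for _ in range(int(T / dt)):
            # Ornstein-Uhlenbeck input (stationary standard deviation = sigma)
            eta += (-eta / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()
            # subthreshold dynamics: damped rotation in the complex plane
            z += ((b + 1j * w) * z + eta) * dt
            t += dt
            if z.imag >= thresh:    # threshold crossing -> spike and reset
                spikes.append(t)
                z = 0.0 + 0.0j
        isi = np.diff(spikes)
        print(f"{len(spikes)} spikes" + (f", mean ISI = {isi.mean():.3f}" if len(isi) else ""))

    In the paper's generalized Langevin setting, the ISI histogram built this way is exactly the object whose unimodal-to-multimodal transitions are characterized analytically.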

  7. Seasonal trends in eDNA detection and occupancy of bigheaded carps

    USGS Publications Warehouse

    Erickson, Richard A.; Merkes, Christopher; Jackson, Craig; Goforth, Reuben; Amberg, Jon J.

    2017-01-01

    Bigheaded carps, which include silver and bighead carp, are threatening to invade the Great Lakes. These species vary seasonally in distribution and abundance due to environmental conditions such as precipitation and temperature. Monitoring this seasonal movement is important for management to control the population size and spread of the species. We examined if environmental DNA (eDNA) approaches could detect seasonal changes of these species. To do this, we developed a novel genetic marker that was able to both detect and differentiate bighead and silver carp DNA. We used the marker, combined with a novel occupancy model, to study the occurrence of bigheaded carps at 3 sites on the Wabash River over the course of a year. We studied the Wabash River because of concerns that carps may be able to use the system to invade the Great Lakes via a now closed (ca. 2017) connection at Eagle Marsh between the Wabash River's watershed and the Great Lakes' watershed. We found seasonal trends in the probability of detection and occupancy that varied across sites. These findings demonstrate that eDNA methods can detect seasonal changes in bigheaded carp densities and suggest that the amount of eDNA present changes seasonally. The site that was farthest upstream and had the lowest carp densities exhibited the strongest seasonal trends for both detection probabilities and sample occupancy probabilities. Furthermore, other observations suggest that carps seasonally leave this site, and we were able to detect this with our eDNA approach.

  8. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), for use in radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. Training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimate of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimate of the DVH. The 2D KDE is used to estimate the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are used as the predictive features to represent the OAR-target spatial relationship. The feasibility of the method is demonstrated on rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer type, with an average relative point-wise difference of about 5%, within a clinically acceptable extent. Conclusion: The method can be used to predict clinically acceptable DVHs and to evaluate the quality and consistency of treatment planning.
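
    A minimal Python analogue of this pipeline using scipy's gaussian_kde is sketched below: a 2D KDE is trained on (feature, dose) pairs pooled from prior plans, and the DVH of a new patient is predicted by marginalizing the conditional dose density over the patient's feature distribution. The single distance feature and all numbers are fabricated stand-ins, not the study's data or features.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)
        # Fabricated training pairs: (signed distance to target, voxel dose)
        dist = rng.uniform(-1, 4, 5000)
        dose = np.clip(60 * np.exp(-np.maximum(dist, 0) / 1.5)
                       + rng.normal(0, 3, 5000), 0, None)
        joint = gaussian_kde(np.vstack([dist, dose]))   # 2D KDE of (feature, dose)

        def predicted_dvh(new_dists, dose_grid):
            """Average the conditional survival function P(D >= d | feature)
            over the new patient's feature samples -> cumulative DVH."""
            dvh = np.zeros_like(dose_grid)
            for x in new_dists:
                pts = np.vstack([np.full_like(dose_grid, x), dose_grid])
                cond = joint(pts)
                cond /= cond.sum()                      # normalized f(dose | x)
                dvh += 1.0 - np.cumsum(cond) + cond     # survival on the grid
            return dvh / len(new_dists)

        grid = np.linspace(0, 70, 71)
        print(predicted_dvh(rng.uniform(-1, 4, 100), grid)[::10].round(3))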

  9. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  10. Fractional Brownian motion with a reflecting wall

    NASA Astrophysics Data System (ADS)

    Wada, Alexander H. O.; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier, leading to a divergence of the probability density. For subdiffusion, α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
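
    A minimal Monte Carlo version of this setup (Python, illustrative parameters): exact fractional Gaussian noise is generated by Cholesky factorization of its covariance, and the wall is imposed by reflecting the walker at the origin each step.

        import numpy as np

        def fgn_cholesky_factor(n, H):
            """Cholesky factor of the fractional-Gaussian-noise covariance;
            exact but O(n^3), so computed once and reused for all walkers."""
            k = np.arange(n)
            gamma = 0.5 * ((k + 1.0)**(2*H) - 2.0*k**(2*H) + np.abs(k - 1.0)**(2*H))
            return np.linalg.cholesky(gamma[np.abs(k[:, None] - k[None, :])])

        rng = np.random.default_rng(4)
        H, n, walkers = 0.75, 1024, 200     # H > 1/2: superdiffusive, alpha = 2H
        Lc = fgn_cholesky_factor(n, H)
        final = []
        for _ in range(walkers):
            steps = Lc @ rng.standard_normal(n)   # one correlated increment series
            x = 0.0
            for s in steps:
                x = abs(x + s)                    # reflecting wall at the origin
            final.append(x)
        # For H > 1/2 the lowest bins dominate: walkers accumulate at the wall
        print(np.histogram(final, bins=10)[0])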

  11. Numerical study of the influence of surface reaction probabilities on reactive species in an rf atmospheric pressure plasma containing humidity

    NASA Astrophysics Data System (ADS)

    Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah

    2018-01-01

    The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) is investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.

  12. Effects of heterogeneous traffic with speed limit zone on the car accidents

    NASA Astrophysics Data System (ADS)

    Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-06-01

    Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ in heterogeneous traffic has an important effect, especially in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region, Pac increases with the fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ decreases the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without an SLZ (n = 0) is lower than Pac with multiple SLZs (n > 0). However, the existence of multiple SLZs on the road decreases the risk of collision in the congested phase.

  13. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  14. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). Probabilistic inference methods provide the natural framework for formulating these problems, considering a posterior probability density function (PDF) that combines the information from a prior PDF with the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models arise at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition are inferred from seismic travel-time and geodetic data.

  15. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
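
    A sketch of the detection-curve construction with logistic regression, using simulated detection outcomes in place of the intertidal field data; the predictors, coefficients, and sample sizes below are assumptions, not the study's fitted values.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        n = 400
        time = rng.uniform(1, 30, n)          # sampling intensity (min searched)
        dens = rng.uniform(0.01, 1.0, n)      # target density (targets/m^2)
        logit = -3 + 0.15 * time + 4 * dens   # assumed "true" detection process
        detected = rng.random(n) < 1 / (1 + np.exp(-logit))

        model = LogisticRegression().fit(np.column_stack([time, dens]), detected)

        # Detection curve: P(detect) vs. sampling intensity at a fixed low density
        grid = np.column_stack([np.linspace(1, 30, 7), np.full(7, 0.05)])
        print(model.predict_proba(grid)[:, 1].round(3))

    One minus the fitted detection probability is the false-negative rate the authors use to interpret presence-absence survey data.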

  16. Five willow varieties cultivated across diverse field environments reveal stem density variation associated with high tension wood abundance

    PubMed Central

    Berthod, Nicolas; Brereton, Nicholas J. B.; Pitre, Frédéric E.; Labrecque, Michel

    2015-01-01

    Sustainable and inexpensive production of biomass is necessary to make biofuel production feasible, but represents a challenge. Five short rotation coppice willow cultivars, selected for high biomass yield, were cultivated at sites in four diverse regions of Quebec under contrasting environments. Wood composition and anatomical traits were characterized, and tree height and stem diameter were measured to evaluate growth performance of the cultivars under the diverse pedoclimatic conditions. Each cultivar showed very specific responses to its environment. While no significant variation in lignin content was observed between sites, there was variation between cultivars. Surprisingly, the pattern of substantial genotype variability in stem density was maintained across all sites. However, wood anatomy did differ between sites in one cultivar (which produced both high- and low-density wood), suggesting a probable response to an abiotic stress. Furthermore, twice as many cellulose-rich G-fibers, comprising over 50% of the secondary xylem, were found in the high-density wood, a finding with potential to bring higher value to the lignocellulosic bioethanol industry. PMID:26583024

  17. Habitat and Burrow System Characteristics of the Blind Mole Rat Spalax galili in an Area of Supposed Sympatric Speciation

    PubMed Central

    Lövy, Matěj; Šklíba, Jan; Hrouzková, Ema; Dvořáková, Veronika; Nevo, Eviatar; Šumbera, Radim

    2015-01-01

    A costly search for food in subterranean rodents resulted in various adaptations improving their foraging success under given ecological conditions. In the Spalax ehrenbergi superspecies, adaptations to local ecological conditions can promote speciation, which has recently been suggested to occur even in sympatry at sites where two soil types with contrasting characteristics abut each other. A quantitative description of ecological conditions at such a site has nevertheless been missing. We measured characteristics of food supply and soil within 16 home ranges of the blind mole rat Spalax galili in an area subdivided into two parts formed by basaltic soil and pale rendzina. We also mapped nine complete mole rat burrow systems to compare burrowing patterns between the soil types. Basaltic soil had a higher food supply and was harder than rendzina, even under higher moisture content and lower bulk density. The population density of mole rats was five times lower in rendzina, possibly due to the lower food supply and the higher cover of Sarcopoterium shrubs, which seem to be avoided by mole rats. A combination of food supply and soil parameters probably influences burrowing patterns, resulting in shorter and more complex burrow systems in basaltic soil. PMID:26192762

  18. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  19. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  20. A Stochastic Super-Exponential Growth Model for Population Dynamics

    NASA Astrophysics Data System (ADS)

    Avila, P.; Rekker, A.

    2010-11-01

    A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
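
    A possible numerical companion to the analytics (the functional form and all parameters below are assumptions): the Euler-Heun scheme, which converges to the Stratonovich solution, simulates paths of dN = rN^(1+β) dt + σN ∘ dW, i.e., super-exponential growth with multiplicative Gaussian white noise on the growth rate.

        import numpy as np

        rng = np.random.default_rng(6)
        r, beta, sigma = 1.0, 0.2, 0.3        # illustrative parameters
        dt, n_steps, n_paths = 1e-4, 5000, 1000

        def drift(N):
            return r * N**(1 + beta)          # self-reinforcing growth

        N = np.full(n_paths, 0.5)             # initial abundance
        for _ in range(n_steps):
            dW = np.sqrt(dt) * rng.standard_normal(n_paths)
            Np = N + drift(N) * dt + sigma * N * dW            # predictor
            N += 0.5 * (drift(N) + drift(Np)) * dt \
                 + 0.5 * sigma * (N + Np) * dW                 # corrector
        print(f"mean abundance at t = {n_steps * dt:.2f}: {N.mean():.3f}")

    The simulation horizon is deliberately kept well below the finite-time singularity of the deterministic super-exponential model.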

  1. Hyperons in the nuclear pasta phase

    NASA Astrophysics Data System (ADS)

    Menezes, Débora P.; Providência, Constança

    2017-10-01

    We have investigated under which conditions hyperons (particularly Λ s and Σ-s ) can be found in the nuclear pasta phase. As the density and temperature are larger and the electron fraction is smaller, the probability is greater that these particles appear, but always in very small amounts. Λ hyperons only occur in gas and in smaller amounts than would occur if matter were homogeneous, never with abundancies above 10-5. The amount of Σ- in the gas is at least two orders of magnitude smaller and can be disregarded in practical calculations.

  2. Structural analysis of vibroacoustical processes

    NASA Technical Reports Server (NTRS)

    Gromov, A. P.; Myasnikov, L. L.; Myasnikova, Y. N.; Finagin, B. A.

    1973-01-01

    The method of automatic identification of acoustical signals by means of segmentation was used to investigate noises and vibrations in machines and mechanisms for cybernetic diagnostics. The structural analysis consists of presenting a noise or vibroacoustical signal as a sequence of segments, determined by time quantization, in which each segment is characterized by specific spectral characteristics. The structural spectrum is plotted as a histogram of the segments, i.e., the probability density of the appearance of a segment as a function of segment type. It is assumed that the conditions of ergodic processes are maintained.

  3. Theory of earthquakes interevent times applied to financial markets

    NASA Astrophysics Data System (ADS)

    Jagielski, Maciej; Kutner, Ryszard; Sornette, Didier

    2017-10-01

    We analyze the probability density function (PDF) of waiting times between financial loss exceedances. The empirical PDFs are fitted with the self-excited Hawkes conditional Poisson process with a long power law memory kernel. The Hawkes process is the simplest extension of the Poisson process that takes into account how past events influence the occurrence of future events. By analyzing the empirical data for 15 different financial assets, we show that the formalism of the Hawkes process used for earthquakes can successfully model the PDF of interevent times between successive market losses.
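
    For illustration, the Python sketch below simulates a self-excited Hawkes process with Ogata's thinning algorithm. An exponential kernel is used here for simplicity, whereas the fits in the paper use a long power-law memory kernel; all parameter values are assumptions. A coefficient of variation of the interevent times above 1 signals the clustering that the Hawkes model captures.

        import numpy as np

        def simulate_hawkes(mu, alpha, beta, T, rng):
            """Ogata thinning for intensity mu + sum alpha*beta*exp(-beta*(t-s));
            the branching ratio alpha < 1 guarantees stationarity."""
            t, events = 0.0, []
            while t < T:
                lam_bar = mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in events)
                t += rng.exponential(1.0 / lam_bar)   # candidate from upper bound
                if t >= T:
                    break
                lam_t = mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in events)
                if rng.random() < lam_t / lam_bar:    # accept with prob lam(t)/lam_bar
                    events.append(t)
            return np.array(events)

        rng = np.random.default_rng(7)
        ev = simulate_hawkes(mu=0.5, alpha=0.6, beta=2.0, T=500.0, rng=rng)
        isi = np.diff(ev)
        print(f"{len(ev)} events; interevent-time CV = {isi.std() / isi.mean():.2f}")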

  4. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2007-02-02

    glass. Pgha: probability of a person being in the glass hazard area. Phit: probability of hit. Phit(f): probability of hit for fatality. Phit(maj): probability of hit for major injury. Phit(min): probability of hit for minor injury. Pi: debris probability densities at the ES. PMaj(pair): individual… combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury…

  5. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
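
    The abundance adjustment described is a one-line computation once capture probabilities have been predicted; the snippet below shows the arithmetic with made-up per-pass capture probabilities.

        import numpy as np

        p_pass = np.array([0.45, 0.35, 0.30])    # assumed capture prob. per pass
        p_cum = 1 - np.prod(1 - p_pass)          # cumulative capture probability
        n_caught = 62                            # total fish captured in 3 passes
        print(f"cumulative p = {p_cum:.3f}, abundance = {n_caught / p_cum:.1f}")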

  6. How the climate limits the wood density of angiosperms

    NASA Astrophysics Data System (ADS)

    Choi, Jin Woo; Kim, Ho-Young

    2017-11-01

    Flowering trees have various types of wood structure to perform multiple functions under their environmental conditions. In addition to transporting water from the roots to the canopy and providing mechanical support, the structure should provide resistance to embolism to maintain the soil-plant-atmosphere continuum. By investigating existing data on the resistance to embolism and wood density of 165 angiosperm species, here we show that the climate can limit the intrinsic properties of trees. Trees living in dry environments require a high wood density to slow down the pressure decrease, as they lose water relatively quickly by evaporation. However, building too much tissue results in decreased hydraulic conductivity and moisture concentration around mesophyll cells. To rationalize the biologically observed lower bound of the wood density, we construct a mechanical model that predicts the wood density as a function of the vulnerability to embolism and the recovery time. We also build an artificial system using hydrogel microchannels that can test the probability of embolism as a function of conduit distributions. Our theoretical prediction is shown to be consistent with the results obtained from the artificial system and the biological data.

  7. Evaluation of the significance of radiographic density and size of calculi in the incidence and clinical manifestations of postlithotripsy renal hematomas.

    PubMed

    Orozco Fariñas, Rodolfo; Iglesias Prieto, José Ignacio; Massarrah Halabi, Jorge; Mancebo Gómez, José María; Pérez-Castro Ellendt, Enrique

    2011-01-01

    The present study is a continuation of an earlier article published on the incidence, clinical manifestations, treatment and risk factors associated with postlithotripsy renal hematomas (1). To assess the possible influence of the size and radiodensity of kidney stones on the incidence and clinical behavior of postlithotripsy renal hematomas. Observational prospective study of 324 renal units in the same number of patients undergoing extracorporeal renal lithotripsy. The variables "calculus size" and "radiographic calculus density" were evaluated statistically by means of the IPSS 0.15 program on the basis of 42 postlithotripsy hematomas diagnosed and grouped according to their clinical behavior. A higher incidence of hematomas was observed for hyperdense calculi (25%) versus medium-density calculi (7.4%); this difference was significant in the asymptomatic hematoma group. Calculus size was unrelated to the incidence of renal hematoma, but there was a significant association between renal hematoma and radiographic calculus density, probably due to the relation of radiographic density to chemical composition and, ultimately, to hardness and ultrastructure. Ultrastructure is yet another factor, among others, to be taken into account as a potential conditioning factor for this complication.

  8. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images, and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and the two models gave very similar results in most cases, especially for large and very large datasets (> 150 samples). Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result because of convergence problems. Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
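
    A rough sketch of the two estimation routes the tool implements, KDE and MLE, here in Python with scipy rather than R, and with synthetic areas standing in for a real inventory.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        # synthetic landslide areas (m^2) drawn from an inverse-gamma model
        areas = stats.invgamma.rvs(a=1.4, scale=1.0e3, size=500, random_state=rng)

        kde = stats.gaussian_kde(np.log(areas))            # KDE in log-space: areas span decades
        a, loc, scale = stats.invgamma.fit(areas, floc=0)  # maximum-likelihood inverse-gamma fit

        x = np.logspace(1, 6, 200)
        pdf_kde = kde(np.log(x)) / x                       # change of variables: p(A) = p(ln A)/A
        pdf_mle = stats.invgamma.pdf(x, a, loc=loc, scale=scale)
        print("modal area of the MLE fit:", scale / (a + 1))   # mode of the inverse-gamma density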

  9. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855

  10. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
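
    For the iid case discussed above, a minimal sketch of the estimator and its CLT-based standard error; the densities and test function are toy choices, and the regenerative MCMC machinery of the paper is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        x = rng.normal(0.0, 2.0, n)                         # draws from pi_1 = N(0, 4)

        def pi(z):  return np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)   # target N(0, 1)
        def pi1(z): return np.exp(-z**2 / 8) / np.sqrt(8 * np.pi)     # proposal N(0, 4)

        w = pi(x) / pi1(x)                                  # importance weights
        f = x**2                                            # estimate E_pi[X^2] = 1
        est = np.mean(w * f)
        se = np.std(w * f, ddof=1) / np.sqrt(n)             # consistent variance estimate gives the CLT error bar
        print(f"estimate {est:.4f} +/- {se:.4f}")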

  11. Students' Understanding of Conditional Probability on Entering University

    ERIC Educational Resources Information Center

    Reaburn, Robyn

    2013-01-01

    An understanding of conditional probability is essential for students of inferential statistics as it is used in Null Hypothesis Tests. Conditional probability is also used in Bayes' theorem, in the interpretation of medical screening tests and in quality control procedures. This study examines the understanding of conditional probability of…

  12. Integrating resource selection information with spatial capture--recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  13. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    ERIC Educational Resources Information Center

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  14. The Effect of Phonotactic Probability and Neighbourhood Density on Pseudoword Learning in 6- and 7-Year-Old Children

    ERIC Educational Resources Information Center

    van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…

  15. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
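
    To make the object concrete, the sketch below evaluates the classical Poisson-mixture series for the non-central chi-squared pdf and checks it against scipy; this is the standard series, not the finite-sum representation derived in the paper.

        import math
        import numpy as np
        from scipy import stats

        k, lam = 4, 2.5                     # degrees of freedom, non-centrality parameter
        x = np.linspace(0.1, 20.0, 50)

        # pdf as a Poisson(lam/2) mixture of central chi-squared densities
        series = sum(np.exp(-lam / 2) * (lam / 2)**j / math.factorial(j)
                     * stats.chi2.pdf(x, k + 2 * j) for j in range(60))
        print(np.allclose(series, stats.ncx2.pdf(x, k, lam)))   # expected: True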

  16. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique, conditioned only on the scalar field, which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20, (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.

  17. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  18. Condition factor variations over time and trophic position among four species of Characidae from Amazonian floodplain lakes: effects of an anomalous drought.

    PubMed

    Tribuzy-Neto, I A; Conceição, K G; Siqueira-Souza, F K; Hurd, L E; Freitas, C E C

    2018-05-01

    The effects of extreme droughts on freshwater fish remain unknown worldwide. In this paper, we estimated the condition factor, a measure of relative fitness based on the relationship of body weight to length, in four fish species representing two trophic levels (omnivores and piscivores) from Amazonian floodplain lakes for three consecutive years: 2004, 2005 (an anomalous drought year), and 2006. The two omnivores, Colossoma macropomum and Mylossoma duriventre, exhibited trends consistent with their life cycles in 2004 and 2006: high values during the hydrologic seasons of high water, receding water, and low water, with a drop following reproduction at the onset of rising water. However, during the drought year of 2005, the condition factor was much lower than normal during the receding and low water seasons, probably as a result of an abnormal reduction in resource availability in a reduced habitat. The two piscivorous piranhas, Serrasalmus spilopleura and S. elongatus, maintained relatively stable values of condition factor over the hydrologic cycles of all three years, with no apparent effect of the drought, probably because the reduction in habitat is counterbalanced by the resulting increase in relative prey density. We suggest that if predictions of increasing drought in the Amazon are correct, predatory species may benefit, at least in the short run, while omnivores may be negatively affected.

  19. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
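
    A numerical sketch of the matrix half of the result, assuming a simple alternating iteration (Sinkhorn-like; the update names x and y are ours): scale a positive matrix A so that B = diag(y) A diag(x) is column stochastic and maps p to q.

        import numpy as np

        rng = np.random.default_rng(7)
        A = rng.uniform(0.5, 2.0, (4, 4))               # positive square matrix
        p = rng.dirichlet(np.ones(4))                   # positive probability vectors
        q = rng.dirichlet(np.ones(4))

        x, y = np.ones(4), np.ones(4)
        for _ in range(500):
            x = 1.0 / (A.T @ y)                         # enforce unit column sums of diag(y) A diag(x)
            y = q / (A @ (x * p))                       # enforce B p = q exactly
        B = np.diag(y) @ A @ np.diag(x)
        print(np.allclose(B.sum(axis=0), 1.0), np.allclose(B @ p, q))

    If the iteration has converged, both checks print True; existence and uniqueness of such a scaling for positive matrices is the content of the cited result, not something this sketch proves.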

  20. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
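
    An illustrative sketch of this style of analysis: fit lognormal parameters to average-density samples split by latitude and compare the widths. The numbers below are synthetic placeholders, not the pulsar or HI data.

        import numpy as np

        rng = np.random.default_rng(4)
        dens_low_b  = np.exp(rng.normal(-1.0, 0.3, 300))   # synthetic <n> for |b| < 5 deg
        dens_high_b = np.exp(rng.normal(-1.5, 0.6, 300))   # synthetic <n> for |b| >= 5 deg

        for name, dens in [("low |b|", dens_low_b), ("high |b|", dens_high_b)]:
            mu, sigma = np.log(dens).mean(), np.log(dens).std(ddof=1)
            print(f"{name}: lognormal mu = {mu:.2f}, width sigma = {sigma:.2f}")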

  1. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing the mechanical effects of Es on the differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Individual CPT soundings were modeled as probability density curves using maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF linking the CPT positions to the potential value at the prediction point were built from the borehole experiments; after numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point was calculated within a Bayesian inverse interpolation framework. The results were compared with those of Gaussian sequential stochastic simulation. Differences between modeling single CPT soundings as normal distributions and as simulated probability density curves based on maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are obtained by accounting for CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization of a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  2. CPROB: A COMPUTATIONAL TOOL FOR CONDUCTING CONDITIONAL PROBABILITY ANALYSIS

    EPA Science Inventory

    Conditional probability analysis measures the probability of observing one event given that another event has occurred. In an environmental context, conditional probability analysis helps assess the association between an environmental contaminant (i.e. the stressor) and the ec...
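
    A minimal sketch of what a conditional probability analysis computes, with synthetic stressor/response data and arbitrary thresholds; it is not the CPROB tool itself.

        import numpy as np

        rng = np.random.default_rng(5)
        stressor = rng.lognormal(0.0, 1.0, 1000)       # e.g., contaminant concentration
        # synthetic ecological response: impairment more likely at high stressor levels
        impaired = rng.random(1000) < 1 / (1 + np.exp(-(stressor - 2.0)))

        for thr in np.quantile(stressor, [0.50, 0.75, 0.90]):
            mask = stressor >= thr
            print(f"P(impaired | stressor >= {thr:.2f}) = {impaired[mask].mean():.2f}")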

  3. Coupling of link- and node-ordering in the coevolving voter model.

    PubMed

    Toruniewska, J; Kułakowski, K; Suchecki, K; Hołyst, J A

    2017-10-01

    We consider the process of reaching the final state in the coevolving voter model. There is a coevolution of state dynamics, in which a node copies the state of a random neighbor with probability 1-p, and link dynamics, in which a node rewires a link to another node of the same state with probability p. The model exhibits an absorbing transition to a frozen phase above a critical value of the rewiring probability. Our analytical and numerical studies show that in the active phase the mean magnetizations of nodes, n, and of links, m, tend to the same value, which depends on the initial conditions. In a similar way, the mean degrees of up spins and down spins become equal. The system obeys a special statistical conservation law, since a linear combination of the two magnetizations, averaged over many realizations starting from the same initial conditions, is a constant of motion: Λ ≡ (1-p)μm(t) + pn(t) = const., where μ is the mean node degree. The final mean magnetization of nodes and links in the active phase is proportional to Λ, while the final density of active links is a quadratic function of Λ. If the rewiring probability is above the critical value and the system separates into disconnected domains, then the magnetizations of nodes and links are no longer equal, and the final mean degrees of up spins and down spins can differ.
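
    A rough simulation sketch of the model and its conservation law, assuming an Erdos-Renyi initial graph and taking the link magnetization m to be the degree-weighted node magnetization (our reading of the definition; an assumption). The law holds on average over realizations, so a single run conserves Lambda only approximately.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(6)
        N, p = 400, 0.3
        G = nx.gnm_random_graph(N, 800, seed=6)            # mean degree mu = 4
        spin = rng.choice([-1, 1], size=N, p=[0.3, 0.7])
        mu = 2 * G.number_of_edges() / N

        def Lambda():
            n = spin.mean()                                # node magnetization
            deg = np.array([G.degree(i) for i in range(N)], float)
            m = (deg * spin).sum() / deg.sum()             # assumed link magnetization
            return (1 - p) * mu * m + p * n

        before = Lambda()
        for _ in range(30000):
            u = int(rng.integers(N))
            nbrs = list(G[u])
            if not nbrs:
                continue
            v = nbrs[int(rng.integers(len(nbrs)))]
            if spin[u] == spin[v]:
                continue                                   # only discordant links evolve
            if rng.random() < p:                           # rewire to a like-minded node
                w = int(rng.choice(np.flatnonzero(spin == spin[u])))
                if w != u and not G.has_edge(u, w):
                    G.remove_edge(u, v)
                    G.add_edge(u, w)
            else:                                          # copy the neighbor's state
                spin[u] = spin[v]

        print(f"Lambda before {before:.3f}, after {Lambda():.3f} (conserved on average)")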

  4. Ecology of a Maryland population of black rat snakes (Elaphe o. obsoleta)

    USGS Publications Warehouse

    Stickel, L.F.; Stickel, W.H.; Schmid, F.C.

    1980-01-01

    Behavior, growth and age of black rat snakes under natural conditions were investigated by mark-recapture methods at the Patuxent Wildlife Research Center for 22 years (1942-1963), with limited observations for 13 more years (1964-1976). Over the 35-year period, 330 snakes were recorded a total of 704 times. Individual home ranges remained stable for many years; male ranges averaged at least 600 m in diameter and female ranges at least 500 m, each including a diversity of habitats, evidenced also in records of foods. Population density was low, probably less than 0.5 snake/ha. Peak activity of both sexes was in May and June, with a secondary peak in September. Large trees in the midst of open areas appeared to serve a significant functional role in the behavioral life pattern of the snake population. Male combat was observed three times in the field. Male snakes grew more rapidly than females, attained larger sizes and lived longer. Some individuals of both sexes probably lived 20 years or more. Weight-length relationships changed as the snakes grew and developed heavier bodies in proportion to length. Growth apparently continued throughout life. Some individuals, however, both male and female, stopped growing for periods of 1 or 2 years and then resumed, a condition probably related to poor health, as suggested by skin ailments.

  5. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.

  6. Variability in growth/no growth boundaries of 188 different Escherichia coli strains reveals that approximately 75% have a higher growth probability under low pH conditions than E. coli O157:H7 strain ATCC 43888.

    PubMed

    Haberbeck, L U; Oliveira, R C; Vivijs, B; Wenseleers, T; Aertsen, A; Michiels, C; Geeraerd, A H

    2015-02-01

    This study investigated the variation in growth/no growth boundaries of 188 Escherichia coli strains. Experiments were conducted in Luria-Bertani media under 36 combinations of lactic acid (LA) (0 and 25 mM), pH (3.8, 3.9, 4.0, 4.1, 4.2 and 4.3 for 0 mM LA and 4.3, 4.4, 4.5, 4.6, 4.7 and 4.8 for 25 mM LA) and temperature (20, 25 and 30 °C). After 3 days of incubation, growth was monitored through optical density measurements. For each strain, a so-called purposeful selection approach was used to fit a logistic regression model that adequately predicted the likelihood for growth. Further, to assess the growth/no growth variability for all the strains at once, a generalized linear mixed model was fitted to the data. Strain was fitted as a fixed factor and replicate as a random blocking factor. E. coli O157:H7 strain ATCC 43888 was used as reference strain allowing a comparison with the other strains. Out of the 188 strains tested, 140 strains (∼75%) presented a significantly higher probability of growth under low pH conditions than the O157:H7 strain ATCC 43888, whereas 20 strains (∼11%) showed a significantly lower probability of growth under high pH conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. CONSTRAINTS ON THE PHYSICAL PROPERTIES OF MAIN BELT COMET P/2013 R3 FROM ITS BREAKUP EVENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirabayashi, Masatoshi; Sánchez, Diego Paul; Gabriel, Travis

    2014-07-01

    Jewitt et al. recently reported that main belt comet P/2013 R3 experienced a breakup, probably due to rotational disruption, with its components separating on mutually hyperbolic orbits. We propose a technique for constraining physical properties of the proto-body, especially the initial spin period and cohesive strength, as a function of the body's estimated size and density. The breakup conditions are developed by combining the mutual orbit dynamics of the smaller components and the failure condition of the proto-body. Given a proto-body with a bulk density ranging from 1000 kg m⁻³ to 1500 kg m⁻³ (a typical range for the bulk density of C-type asteroids), we obtain possible values of the cohesive strength (40-210 Pa) and the initial spin period (0.48-1.9 hr). From this result, we conclude that although the proto-body could have been a rubble pile, it was likely spinning beyond its gravitational binding limit and would have needed cohesive strength to hold itself together. Additional observations of P/2013 R3 will enable stronger constraints on this event, and the present technique will be able to give more precise estimates of its internal structure.

  8. Study on conditional probability of surface rupture: effect of fault dip and width of seismogenic layer

    NASA Astrophysics Data System (ADS)

    Inoue, N.

    2017-12-01

    The conditional probability of surface rupture is affected by various factors, such as shallow material properties, the rupture process of the earthquake, and ground motions. Toda (2013) pointed out differences between the conditional probabilities of strike-slip and reverse faults when the fault dip and the width of the seismogenic layer are taken into account. This study evaluated the conditional probability of surface rupture by the following procedure. Fault geometry was determined from randomly generated magnitudes based on the method of The Headquarters for Earthquake Research Promotion (2017). If the defined fault plane did not saturate the assumed width of the seismogenic layer, the fault plane depth was assigned randomly within the seismogenic layer. Logistic regression was performed on two data sets: surface displacements calculated by dislocation methods (Wang et al., 2003) from the defined source fault, and the depth of the top of the defined source fault. The conditional probability estimated from surface displacement was higher for reverse faults than for strike-slip faults, consistent with previous studies (e.g., Kagawa et al., 2004; Kataoka and Kusakabe, 2005). In contrast, the probability estimated from the depth of the source fault was higher for thrust faults than for strike-slip and reverse faults, a trend similar to the conditional probabilities obtained from PFDHA (Youngs et al., 2003; Moss and Ross, 2011). Combining the simulated results for thrust and reverse faults also yields a low probability. The worldwide compiled reverse-fault data include earthquakes with low fault dip angles; for Japanese reverse faults, with fewer low-dip-angle earthquakes, the conditional probability may be lower and similar to that of strike-slip faults (cf. Takao et al., 2013). In the future, numerical simulations that consider the failure condition of the surface above the source fault should be performed to examine displacement amounts and conditional probabilities quantitatively.

  9. The formation conditions of chondrules and chondrites

    USGS Publications Warehouse

    Alexander, C.M. O'D.; Grossman, Jeffrey N.; Ebel, D.S.; Ciesla, F.J.

    2008-01-01

    Chondrules, which are roughly millimeter-sized silicate-rich spherules, dominate the most primitive meteorites, the chondrites. They formed as molten droplets and, judging from their abundances in chondrites, are the products of one of the most energetic processes that operated in the early inner solar system. The conditions and mechanism of chondrule formation remain poorly understood. Here we show that the abundance of the volatile element sodium remained relatively constant during chondrule formation. Prevention of the evaporation of sodium requires that chondrules formed in regions with much higher solid densities than predicted by known nebular concentration mechanisms. These regions would probably have been self-gravitating. Our model explains many other chemical characteristics of chondrules and also implies that chondrule and planetesimal formation were linked.

  10. Conservational PDF Equations of Turbulence

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2010-01-01

    Recently we have revisited the traditional probability density function (PDF) equations for the velocity and species in turbulent incompressible flows. They are all unclosed due to the appearance of various conditional means, which are modeled empirically. However, we have observed that it is possible to establish a closed velocity PDF equation and a closed joint velocity and species PDF equation through conditions derived from the integral form of the Navier-Stokes equations. Although, in theory, the resulting PDF equations are neither general nor unique, they nevertheless lead to the exact transport equations for the first moment as well as all higher-order moments. We refer to these PDF equations as the conservational PDF equations. This observation is worth further exploration for its validity and CFD applications.

  11. Decaying two-dimensional turbulence in a circular container.

    PubMed

    Schneider, Kai; Farge, Marie

    2005-12-09

    We present direct numerical simulations of two-dimensional decaying turbulence at initial Reynolds number 5 × 10⁴ in a circular container with no-slip boundary conditions. Starting with random initial conditions, the flow rapidly exhibits self-organization into coherent vortices. We study their formation and the role of the viscous boundary layer in the production and decay of integral quantities. The no-slip wall produces vortices which are injected into the bulk flow and tend to compensate the enstrophy dissipation. The self-organization of the flow is reflected by the transition of the initially Gaussian vorticity probability density function (PDF) towards a distribution with exponential tails. Because of the presence of coherent vortices, the pressure PDF becomes strongly skewed, with exponential tails for negative values.

  12. The Interaction Between the Magnetosphere of Mars and that of Comet Siding Spring

    NASA Astrophysics Data System (ADS)

    Holmstrom, M.; Futaana, Y.; Barabash, S. V.

    2015-12-01

    On 19 October 2014 the comet Siding Spring flew by Mars. This was a unique opportunity to study the interaction between a cometary and a planetary magnetosphere. Here we model the magnetosphere of the comet using a hybrid plasma solver (ions as particles, electrons as a fluid). The undisturbed upstream solar wind ion conditions are estimated from observations by ASPERA-3/IMA on Mars Express during several orbits. It is found that Mars probably passed through a solar wind that was disturbed by the comet during the flyby. The uncertainty arises because the size of the disturbed solar wind region in the comet simulation is sensitive to the assumed upstream solar wind conditions, especially the solar wind proton density.

  13. Fractional Brownian motion with a reflecting wall.

    PubMed

    Wada, Alexander H O; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
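
    A Monte Carlo sketch of this setup, assuming the Cholesky route to fractional Gaussian noise and reflection by taking absolute values at the wall; step counts and the Hurst exponent H are arbitrary (with alpha = 2H).

        import numpy as np

        rng = np.random.default_rng(8)
        H, n_steps, n_walkers = 0.75, 512, 2000        # H > 1/2: superdiffusion, alpha = 1.5

        # fractional-Gaussian-noise autocovariance and its Cholesky factor
        k = np.arange(n_steps)
        gamma = 0.5 * ((k + 1)**(2 * H) + np.abs(k - 1)**(2 * H) - 2 * k**(2 * H))
        C = np.linalg.cholesky(gamma[np.abs(k[:, None] - k[None, :])])
        noise = rng.standard_normal((n_walkers, n_steps)) @ C.T   # correlated increments

        x = np.zeros(n_walkers)
        for step in range(n_steps):
            x = np.abs(x + noise[:, step])             # reflecting wall at x = 0
        # free fBm would give <x^2> = t^(2H); reflection changes the prefactor, not the exponent
        print((x**2).mean(), "vs", float(n_steps)**(2 * H))

    A histogram of the final positions x would then reveal the accumulation of walkers near the wall for H > 1/2 and the depletion for H < 1/2.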

  14. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.

  15. Demographics and density estimates of two three-toed box turtle (Terrapene carolina triunguis) populations within forest and restored prairie sites in central Missouri.

    PubMed

    O'Connor, Kelly M; Rittenhouse, Chadwick D; Millspaugh, Joshua J; Rittenhouse, Tracy A G

    2015-01-01

    Box turtles (Terrapene carolina) are widely distributed but vulnerable to population decline across their range. Using distance sampling, morphometric data, and an index of carapace damage, we surveyed three-toed box turtles (Terrapene carolina triunguis) at 2 sites in central Missouri, and compared differences in detection probabilities when transects were walked by one or two observers. Our estimated turtle density within forested cover was lower at the Thomas S. Baskett Wildlife Research and Education Center, a site dominated by eastern hardwood forest (d = 1.85 turtles/ha, 95% CI [1.13, 3.03]), than at the Prairie Fork Conservation Area, a site containing a mix of open field and hardwood forest (d = 4.14 turtles/ha, 95% CI [1.99, 8.62]). Turtles at Baskett were significantly older and larger than turtles at Prairie Fork. Damage to the carapace did not differ significantly between the 2 populations despite the more prevalent habitat management, including mowing and prescribed fire, at Prairie Fork. We achieved improved estimates of density using two rather than one observer at Prairie Fork, but negligible differences in density estimates between the two methods at Baskett. Error associated with probability of detection decreased at both sites with the addition of a second observer. We provide demographic data that suggest the use of a range of habitat conditions by three-toed box turtles. This case study suggests that habitat management practices and their impacts on habitat composition may be a cause of the differences observed in our focal populations of turtles.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10⁵ samples takes only 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
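
    Not the fastKDE code itself, but a small sketch of the capability described at the end, using scipy's Gaussian KDE for brevity: estimate a joint density and read off a conditional PDF as p(y|x) = p(x, y)/p(x).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        x = rng.normal(0.0, 1.0, 2000)
        y = np.sin(x) + 0.3 * rng.normal(size=2000)    # nontrivial relationship to recover

        joint = stats.gaussian_kde(np.vstack([x, y]))
        marg_x = stats.gaussian_kde(x)

        x0, ys = 1.0, np.linspace(-2.0, 2.0, 101)
        cond = joint(np.vstack([np.full_like(ys, x0), ys])) / marg_x(x0)   # p(y | x = x0)
        print("E[y | x = 1] ~", (cond * ys).sum() / cond.sum())            # should be near sin(1)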

  17. Predictions of the causal entropic principle for environmental conditions of the universe

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Frey, Andrew R.; Holder, Gilbert

    2008-03-01

    The causal entropic principle has been proposed as an alternative to the anthropic principle for understanding the magnitude of the cosmological constant. In this approach, the probability to create observers is assumed to be proportional to the entropy production ΔS in a maximal causally connected region—the causal diamond. We improve on the original treatment by better quantifying the entropy production due to stars, using an analytic model for the star formation history which accurately accounts for changes in cosmological parameters. We calculate the dependence of ΔS on the density contrast Q=δρ/ρ, and find that our universe is much closer to the most probable value of Q than in the usual anthropic approach and that probabilities are relatively weakly dependent on this amplitude. In addition, we make first estimates of the dependence of ΔS on the baryon fraction and overall matter abundance. Finally, we also explore the possibility that decays of dark matter, suggested by various observed gamma ray excesses, might produce a comparable amount of entropy to stars.

  18. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell density probability density functions for an arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  19. Contribution of Proton Capture Reactions to the Ascertained Abundance of Fluorine in the Evolved Stars of Globular Cluster M4, M22, 47 Tuc and NGC 6397

    NASA Astrophysics Data System (ADS)

    Mahanta, Upakul; Goswami, Aruna; Duorah, H. L.; Duorah, K.

    2017-12-01

    The origin of the abundance patterns and the (anti)correlations among the elements found in stars of globular clusters (GCs) remains unexplained to date. Proton-capture reactions are presently recognized as one of the necessary candidates for this observed behaviour in second-generation stars. We propose a reaction network for the carbon-nitrogen-oxygen-fluorine (CNOF) cycle at evolved stellar conditions, since fluorine (^{19}F) is one element that is affected by proton-capture reactions. The stellar temperatures considered here range from 2×10⁷ to 10×10⁷ K, with accretion occurring at material densities of 10² g/cm³ and 10³ g/cm³. Temperature-density conditions of this kind are likely to prevail in the H-burning shell of evolved stars. The estimated ^{19}F abundances are then compared with the data observed for some metal-poor giants of the GCs M4, M22, 47 Tuc and NGC 6397. The calculated abundances show excellent agreement with the observed ones, with a correlation coefficient above 0.9, supporting the occurrence of this nuclear cycle under the adopted temperature-density conditions.

  20. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.

  1. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  2. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location.

  3. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.

  4. Wavefronts, actions and caustics determined by the probability density of an Airy beam

    NASA Astrophysics Data System (ADS)

    Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón

    2018-07-01

    The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton-Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton-Jacobi and Laplace equations in free space. That is, with this type of solution we associate a two-parameter family of wavefronts in spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton-Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maximum of the probability density of an Airy beam determines a Hamiltonian system.

  5. Identification of cloud fields by the nonparametric algorithm of pattern recognition from normalized video data recorded with the AVHRR instrument

    NASA Astrophysics Data System (ADS)

    Protasov, Konstantin T.; Pushkareva, Tatyana Y.; Artamonov, Evgeny S.

    2002-02-01

    The problem of cloud field recognition from NOAA satellite data is urgent not only for meteorological problems but also for resource-ecological monitoring of the Earth's underlying surface, associated with the detection of thunderstorm clouds, estimation of the liquid water content of clouds and the moisture of the soil, the degree of fire hazard, etc. To solve these problems, we used the AVHRR/NOAA video data that regularly display the situation in the territory. The complexity and extremely nonstationary character of the problems to be solved call for the use of information from all spectral channels, the mathematical apparatus of testing statistical hypotheses, and methods of pattern recognition and identification of the informative parameters. For this class of detection and pattern recognition problems, the average risk functional is a natural criterion for the quality and the information content of the synthesized decision rules. In this case, to solve efficiently the problem of identifying cloud field types, the informative parameters must be determined by minimization of this functional. Since the conditional probability density functions, representing mathematical models of stochastic patterns, are unknown, the problem of nonparametric reconstruction of distributions from the learning samples arises. To this end, we used nonparametric estimates of distributions with the modified Epanechnikov kernel. The unknown parameters of these distributions were determined by minimization of the risk functional, which for the learning sample was replaced by the empirical risk. After the conditional probability density functions had been reconstructed for the examined hypotheses, the cloudiness type was identified using the Bayes decision rule.
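
    An illustrative sketch of the recognition scheme described above, assuming synthetic two-channel radiances, a fixed bandwidth, and equal priors: nonparametric class-conditional densities built with an Epanechnikov product kernel are plugged into the Bayes decision rule.

        import numpy as np

        def epan_kde(train, x, h):
            # product Epanechnikov kernel density estimate of `train` evaluated at points `x`
            u = (x[:, None, :] - train[None, :, :]) / h
            kern = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0).prod(axis=2)
            return kern.mean(axis=1) / h**train.shape[1]

        rng = np.random.default_rng(10)
        clear = rng.normal([0.2, 0.3], 0.10, (300, 2))   # "clear sky" training sample
        cloud = rng.normal([0.6, 0.7], 0.15, (300, 2))   # "cloud" training sample
        test  = rng.normal([0.55, 0.65], 0.20, (50, 2))

        p_clear = epan_kde(clear, test, h=0.1)
        p_cloud = epan_kde(cloud, test, h=0.1)
        label = np.where(p_cloud > p_clear, "cloud", "clear")   # Bayes rule with equal priors
        print("fraction classified as cloud:", (label == "cloud").mean())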

  6. Predicting coastal cliff erosion using a Bayesian probabilistic model

    USGS Publications Warehouse

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.

  7. A new approach to the problem of bulk-mediated surface diffusion.

    PubMed

    Berezhkovskii, Alexander M; Dagdug, Leonardo; Bezrukov, Sergey M

    2015-08-28

    This paper is devoted to bulk-mediated surface diffusion of a particle which can diffuse both on a flat surface and in the bulk layer above the surface. It is assumed that the particle is on the surface initially (at t = 0) and again at the observation time t, while in between it may escape from the surface and return any number of times. We propose a new approach to the problem, which reduces its solution to that of a two-state problem of the particle transitions between the surface and the bulk layer, focusing on the cumulative residence times spent by the particle in the two states. These times are random variables, the sum of which is equal to the total observation time t. The advantage of the proposed approach is that it allows for a simple exact analytical solution for the double Laplace transform of the conditional probability density of the cumulative residence time spent on the surface by the particle observed for time t. This solution is used to find the Laplace transform of the particle mean square displacement and to analyze the peculiarities of its time behavior over the entire range of time. We also establish a relation between the double Laplace transform of the conditional probability density and the Fourier-Laplace transform of the particle propagator over the surface. The proposed approach treats the cases of both finite and infinite bulk layer thicknesses (where bulk-mediated surface diffusion is normal and anomalous at asymptotically long times, respectively) on equal footing.
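    In compact form (notation ours, not the authors'), the decomposition and the central transform read

        \[
          t_s + t_b = t, \qquad
          \tilde{P}(s_s, s_t) = \int_0^\infty \! dt \, e^{-s_t t} \int_0^t \! dt_s \, e^{-s_s t_s} \, P(t_s \mid t),
        \]

    where t_s and t_b are the cumulative residence times on the surface and in the bulk, and P(t_s | t) is the conditional probability density whose double Laplace transform the approach yields in closed form.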

  8. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
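    The flavor of the approach can be conveyed by a deliberately stripped-down sketch (ours, not the authors': no Brownian dynamics of ions, and a 1D substrate), in which attachment probabilities alone distinguish metal-on-substrate from metal-on-metal events, so nucleation and island growth emerge rather than being prescribed.

        import numpy as np

        rng = np.random.default_rng(0)

        def deposit(width=200, attempts=50_000, p_substrate=0.001, p_metal=0.05):
            """Each attempt picks a random site; deposition succeeds with a
            probability that depends on whether the site is bare substrate
            (nucleation) or already-covered metal (growth)."""
            height = np.zeros(width, dtype=int)
            for _ in range(attempts):
                i = rng.integers(width)
                p = p_substrate if height[i] == 0 else p_metal
                if rng.random() < p:
                    height[i] += 1
            return height

        h = deposit()
        print("occupied sites (island density proxy):", np.count_nonzero(h))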

  9. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data.
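    A sketch of the single-sensor logic under stated assumptions (all numbers and the logistic detector curve below are placeholders, not values from the paper): draw source level and transmission loss from input distributions, form the SNR from a simplified passive sonar equation, and average the detector's probability of detection.

        import numpy as np

        def detection_probability(snr_db, snr50=10.0, slope=1.5):
            """Hypothetical detector characterization: logistic curve in SNR (dB)."""
            return 1.0 / (1.0 + np.exp(-(snr_db - snr50) / slope))

        rng = np.random.default_rng(1)
        n = 100_000
        sl = rng.normal(200.0, 5.0, n)      # source level (dB), placeholder
        tl = rng.uniform(60.0, 120.0, n)    # transmission loss (dB), placeholder
        nl = 70.0                           # noise level (dB), placeholder
        snr = sl - tl - nl                  # simplified passive sonar equation
        print(f"mean P(detect): {detection_probability(snr).mean():.3f}")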

  10. Habitat relationships and nest site characteristics of cavity-nesting birds in cottonwood floodplains

    USGS Publications Warehouse

    Sedgwick, James A.; Knopf, Fritz L.

    1990-01-01

    We examined habitat relationships and nest site characteristics for 6 species of cavity-nesting birds--American kestrel (Falco sparverius), northern flicker (Colaptes auratus), red-headed woodpecker (Melanerpes erythrocephalus), black-capped chickadee (Parus atricapillus), house wren (Troglodytes aedon), and European starling (Sturnus vulgaris)--in a mature plains cottonwood (Populus sargentii) bottomland along the South Platte River in northeastern Colorado in 1985 and 1986. We examined characteristics of cavities, nest trees, and the habitat surrounding nest trees. Density of large trees (>69 cm dbh), total length of dead limbs ≥10 cm diameter (TDLL), and cavity density were the most important habitat variables; dead limb length (DLL), dbh, and species were the most important tree variables; and cavity height, cavity entrance diameter, and substrate condition at the cavity (live vs. dead) were the most important cavity variables in segregating cavity nesters along habitat, tree, and cavity dimensions, respectively. Random sites differed most from cavity-nesting bird sites on the basis of dbh, DLL, limb tree density (trees with ≥1 m dead limbs ≥10 cm diameter), and cavity density. Habitats of red-headed woodpeckers and American kestrels were the most distinctive, differing most from random sites. Based on current trends in cottonwood demography, densities of cavity-nesting birds will probably decline gradually along the South Platte River, paralleling a decline in DLL, limb tree density, snag density, and the concurrent lack of cottonwood regeneration.

  11. In Search of Sleep Biomarkers of Alzheimer’s Disease: K-Complexes Do Not Discriminate between Patients with Mild Cognitive Impairment and Healthy Controls

    PubMed Central

    Reda, Flaminia; Gorgoni, Maurizio; Lauri, Giulia; Truglia, Ilaria; Cordone, Susanna; Scarpelli, Serena; Mangiaruga, Anastasia; D’Atri, Aurora; Ferrara, Michele; Lacidogna, Giordano; Marra, Camillo; Rossini, Paolo Maria; De Gennaro, Luigi

    2017-01-01

    The K-complex (KC) is one of the hallmarks of Non-Rapid Eye Movement (NREM) sleep. Recent observations point to a drastic decrease of spontaneous KCs in Alzheimer’s disease (AD). However, no study has investigated when, in the development of AD, this phenomenon starts. The assessment of KC density in mild cognitive impairment (MCI), a clinical condition considered a possible transitional stage between normal cognitive function and probable AD, is still lacking. The aim of the present study was to compare KC density in AD/MCI patients and healthy controls (HCs), also assessing the relationship between KC density and cognitive decline. Twenty amnesic MCI patients underwent polysomnographic recording of nocturnal sleep. Their data were compared with those of 20 previously recorded HCs and 20 AD patients. KCs during stage 2 NREM sleep were visually identified and the KC densities of the three groups were compared. AD patients showed a significant KC density decrease compared with MCI patients and HCs, while no differences were observed between MCI patients and HCs. KC density was positively correlated with Mini-Mental State Examination (MMSE) scores. Our results point to the existence of an alteration of KC density only in a full-blown phase of AD, which was not observable in the early stage of the pathology (MCI), but was linked with cognitive deterioration. PMID:28468235

  12. Non-Fickian dispersion of groundwater age

    PubMed Central

    Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.

    2014-01-01

    We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651

  13. Electrochemical oxidation of ampicillin antibiotic at boron-doped diamond electrodes and process optimization using response surface methodology.

    PubMed

    Körbahti, Bahadır K; Taşyürek, Selin

    2015-03-01

    Electrochemical oxidation and process optimization of ampicillin antibiotic at boron-doped diamond electrodes (BDD) were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residuals, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined at 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm² current density, and 36 °C reaction temperature. Under response surface optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1 %, 92.5 %, and 71.7 kWh/kg CODr, respectively.

  14. Effects of shifts in the rate of repetitive stimulation on sustained attention

    NASA Technical Reports Server (NTRS)

    Krulewitz, J. E.; Warm, J. S.; Wohl, T. H.

    1975-01-01

    The effects of shifts in the rate of presentation of repetitive neutral events (background event rate) were studied in a visual vigilance task. Four groups of subjects experienced either a high (21 events/min) or a low (6 events/min) event rate for 20 min and then experienced either the same or the alternate event rate for an additional 40 min. The temporal occurrence of critical target signals was identical for all groups, irrespective of event rate. The density of critical signals was 12 signals/20 min. By the end of the session, shifts in event rate were associated with changes in performance which resembled contrast effects found in other experimental situations in which shift paradigms were used. Relative to constant event rate control conditions, a shift from a low to a high event rate depressed the probability of signal detections, while a shift in the opposite direction enhanced the probability of signal detections.

  15. Fusion competent synaptic vesicles persist upon active zone disruption and loss of vesicle docking

    PubMed Central

    Wang, Shan Shan H.; Held, Richard G.; Wong, Man Yan; Liu, Changliang; Karakhanyan, Aziz; Kaeser, Pascal S.

    2016-01-01

    In a nerve terminal, synaptic vesicle docking and release are restricted to an active zone. The active zone is a protein scaffold that is attached to the presynaptic plasma membrane and apposed to postsynaptic receptors. Here, we generated conditional knockout mice removing the active zone proteins RIM and ELKS, which additionally led to loss of Munc13, Bassoon, Piccolo, and RIM-BP, indicating disassembly of the active zone. We observed a near complete lack of synaptic vesicle docking and a strong reduction in vesicular release probability and the speed of exocytosis, but total vesicle numbers, SNARE protein levels, and postsynaptic densities remained unaffected. Despite loss of the priming proteins Munc13 and RIM and of docked vesicles, a pool of releasable vesicles remained. Thus, the active zone is necessary for synaptic vesicle docking and to enhance release probability, but releasable vesicles can be localized distant from the presynaptic plasma membrane. PMID:27537483

  16. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for the various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep, and deep sleep. The constructed expert knowledge database reflects the distributions of the characteristic parameters and can be adapted to the variable sleep data encountered in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
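    A toy version of such a rule (the parameters and Gaussian stand-ins below are invented; the actual database stores empirical PDFs for several parameters): each sleep stage contributes a class-conditional density, and an epoch is assigned to the stage with the highest conditional probability.

        import numpy as np
        from scipy.stats import norm

        # Hypothetical expert-knowledge database for one EEG-derived parameter.
        stage_pdfs = {
            "awake": norm(0.8, 0.20),
            "REM":   norm(0.5, 0.15),
            "light": norm(0.3, 0.10),
            "deep":  norm(0.1, 0.05),
        }
        priors = {stage: 0.25 for stage in stage_pdfs}

        def classify_epoch(x):
            """Assign the stage with the highest posterior probability."""
            post = {s: pdf.pdf(x) * priors[s] for s, pdf in stage_pdfs.items()}
            z = sum(post.values())
            return max(post, key=post.get), {s: p / z for s, p in post.items()}

        print(classify_epoch(0.45))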

  17. Opinion evolution and rare events in an open community

    NASA Astrophysics Data System (ADS)

    Ye, Yusong; Yang, Zhuoqin; Zhang, Zili

    2016-11-01

    There are many multi-stable phenomena in society. To explain them, we have studied opinion evolution in an open community, focusing on the probability of transition (or the mean transition time) for the system to transfer from one state to another. We suggest a bistable model to provide an interpretation of these phenomena. The quasi-potential method that we use is the key method for calculating the transition time, and it can also be used to determine the whole probability density. We study the conditions for bistability and then discuss rare events in a multi-stable system. In our model, we find that two parameters, "temperature" and "persuading intensity," influence the behavior of the system; a suitable "persuading intensity" and a low "temperature" make the system more stable, meaning that transitions rarely happen. The asymmetric phenomenon caused by "public opinion" is also discussed.

  18. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  19. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  20. Two proposed convergence criteria for Monte Carlo solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two methods are proposed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
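    For the first proposed criterion, a minimal sketch (one common estimator of the relative VOV, with a rule of thumb such as VOV < 0.1 used in Monte Carlo transport practice; the synthetic scores are placeholders):

        import numpy as np

        def relative_vov(x):
            """Relative variance of the variance of the sample mean:
            VOV = sum((x - xbar)**4) / (sum((x - xbar)**2))**2 - 1/N."""
            d = x - x.mean()
            return (d**4).sum() / (d**2).sum()**2 - 1.0 / x.size

        rng = np.random.default_rng(2)
        scores = rng.exponential(size=10_000)        # stand-in history scores
        print(f"VOV = {relative_vov(scores):.4f}")   # want well below ~0.1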

  1. Oak regeneration and overstory density in the Missouri Ozarks

    Treesearch

    David R. Larsen; Monte A. Metzger

    1997-01-01

    Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...

  2. Population ecology of the mallard: II. Breeding habitat conditions, size of the breeding populations, and production indices

    USGS Publications Warehouse

    Pospahala, Richard S.; Anderson, David R.; Henny, Charles J.

    1974-01-01

    This report, the second in a series on a comprehensive analysis of mallard population data, provides information on mallard breeding habitat, the size and distribution of breeding populations, and indices to production. The information in this report is primarily the result of large-scale aerial surveys conducted during May and July, 1955-73. The history of the conflict in resource utilization between agriculturalists and wildlife conservation interests in the primary waterfowl breeding grounds is reviewed. The numbers of ponds present during the breeding season and the midsummer period and the effects of precipitation and temperature on the number of ponds present are analyzed in detail. No significant cycles in precipitation were detected, and it appears that precipitation is primarily influenced by substantial seasonal and random components. Annual estimates (1955-73) of the number of mallards in surveyed and unsurveyed breeding areas provided estimates of the size and geographic distribution of breeding mallards in North America. The estimated size of the mallard breeding population in North America has ranged from a high of 14.4 million in 1958 to a low of 7.1 million in 1965. Generally, the mallard breeding population began to decline after the 1958 peak until 1962, and remained below 10 million birds until 1970. The decline and subsequent low level of the mallard population between 1959 and 1969 generally coincided with a period of poor habitat conditions on the major breeding grounds. The density of mallards was highest in the Prairie-Parkland Area, with an average of nearly 19.2 birds per square mile. The proportion of the continental mallard breeding population in the Prairie-Parkland Area ranged from 30% in 1962 to a high of 60% in 1956. The geographic distribution of breeding mallards throughout North America was significantly related to the number of May ponds in the Prairie-Parkland Area. Estimates of midsummer habitat conditions and indices to production from the July Production Survey were studied in detail. Several indices relating to production showed marked declines from west to east in the Prairie-Parkland Area; these are: (1) density of breeding mallards (per square mile and per May pond), (2) brood density (per square mile and per July pond), (3) average brood size (all species combined), and (4) brood survival from class II to class III. An index to late nesting and renesting efforts was highest during years when midsummer water conditions were good. Production rates of many ducks breeding in North America appear to be regulated by both density-dependent and density-independent factors. Spacing of birds in the Prairie-Parkland Area appeared to be a key factor in the density-dependent regulation of the population. The spacing mechanism, in conjunction with habitat conditions, influenced some birds to overfly the primary breeding grounds into less favorable habitats to the north and northwest, where the production rate may be suppressed. The production rate of waterfowl in the Prairie-Parkland Area seems to be independent of density (after emigration has taken place) because the production index appears to be a linear function of the number of breeding birds in the area. Similarly, the production rate of waterfowl in northern Saskatchewan and northern Manitoba appeared to be independent of density. Production indices in these northern areas appear to be a linear function of the size of the breeding population. Thus, the density and distribution of breeding ducks are probably regulated through a spacing mechanism that is at least partially dependent on measurable environmental factors. The result is a density-dependent process operating ultimately to affect the production and production rate of breeding ducks on a continent-wide basis. Continental production, and therefore the size of the fall population, is probably partially regulated by the number of birds that are distributed north and northwest into environments less favorable for successful reproduction. Thus, spacing of the birds in the Prairie-Parkland Area and the movement of a fraction of the birds out of the prime breeding areas may be key factors in the density-dependent regulation of the total mallard population.

  3. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like the probability distribution, the mean square displacement (MSD), and the first-passage time] depend on the time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when t_a ≪ t and t_a ≫ t.

  4. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and a standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.

  5. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    Our aim was to understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups, and 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they had discovered a new, breakthrough intervention in 16% of trials. We propose these figures as benchmarks against which the future development of 'breakthrough' treatments should be measured.

  6. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  7. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  8. A simple probabilistic model of initiation of motion of poorly-sorted granular mixtures subjected to a turbulent flow

    NASA Astrophysics Data System (ADS)

    Ferreira, Rui M. L.; Ferrer-Boix, Carles; Hassan, Marwan

    2015-04-01

    Initiation of sediment motion is a classic problem of sediment and fluid mechanics that has been studied at a wide range of scales. By analysis at channel scale one means the investigation of a reach of a stream sufficiently large to encompass a large number of sediment grains but sufficiently small not to experience important variations in key hydrodynamic variables. At this scale, and for poorly-sorted hydraulically rough granular beds, existing studies show a wide variation of the value of the critical Shields parameter. Such uncertainty constitutes a problem for engineering studies. To go beyond the Shields paradigm for the study of incipient motion at channel scale, the problem can be cast in probabilistic terms. An empirical probability of entrainment, which naturally accounts for size-selective transport, can be calculated at the scale of the bed reach, using (a) the probability density functions (PDFs) of the flow velocities f_u(u | x_n) over the bed reach, where u is the flow velocity and x_n is the location, (b) the PDF of the variability of competent velocities for the entrainment of individual particles, f_{u_p}(u_p), where u_p is the competent velocity, and (c) the concept of the joint probability of entrainment and grain size. One must first divide the mixture into several classes M and assign a corresponding frequency p_M. For each class, a conditional PDF of the competent velocity, f_{u_p}(u_p | M), is obtained from the PDFs of the parameters that enter the model for the entrainment of a single particle,

        \[ \frac{u_p}{\sqrt{g(s-1)d_i}} = \Phi_u\!\left(\{C_k\}, \{\varphi_k\}, \psi, \frac{u_p d_i}{\nu^{(w)}}\right), \]

    where {C_k} is a set of shape parameters that characterize the non-sphericity of the grain, {φ_k} is a set of angles that describe the orientation of the particle axes and its positioning relative to its neighbours, ψ is the skin friction angle of the particles, u_p d_i / ν^(w) is a particle Reynolds number, d_i is the sieving diameter of the particle, g is the acceleration of gravity, and Φ_u is a general function. For the same class, the probability density function of the instantaneous turbulent velocities, f_u(u | M), can be obtained from judicious laboratory or field work. From these probability densities, the empirical conditional probability of entrainment of class M is

        \[ P(E \mid M) = \int_{-\infty}^{+\infty} P(u > u_p \mid M)\, f_{u_p}(u_p \mid M)\, du_p, \qquad P(u > u_p \mid M) = \int_{u_p}^{+\infty} f_u(u \mid M)\, du. \]

    Employing a frequentist interpretation of probability, in an actual bed reach subjected to a succession of N (turbulent) flows, the above equation states that N·P(E|M) is the number of flows in which the grains of class M are entrained. The joint probability of entrainment and class M is given by the product P(E|M)·p_M. Hence, since the classes M are mutually exclusive, the channel-scale empirical probability of entrainment is the marginal probability

        \[ P(E) = \sum_{M} P(E \mid M)\, p_M. \]

    Fractional bedload transport rates can be obtained from the probability of entrainment through

        \[ q_{s,M} = E_M\, \ell_{s,M}, \qquad E_M = P(E \mid M)\, p_M\, (1-\lambda)\, \frac{d}{2T}, \]

    where q_{s,M} is the bedload discharge in volume per unit width of size fraction M, E_M is the entrainment rate per unit bed area of that size fraction, d is a characteristic diameter of grains on the bed surface, λ is the bed porosity, T is the integral time scale of the longitudinal velocity at the elevation of the crests of the roughness elements, and ℓ_{s,M} is the mean displacement length of class M. Fractional transport rates were computed and compared with experimental data, determined from bedload samples collected in a 12 m long, 40 cm wide channel under uniform flow conditions and sediment recirculation. The median diameter of the bulk bed mixture was 3.2 mm and the geometric standard deviation was 1.7. Shields parameters ranged from 0.027 to 0.067, while the boundary Reynolds number ranged between 220 and 376. Instantaneous velocities were measured with 2-component Laser Doppler Anemometry. The results of the probabilistic model exhibit generally good agreement with the laboratory data. However, the probability of entrainment of the smallest size fractions is systematically underestimated. This may be caused by phenomena that are absent from the model, for instance the increased magnitude of hydrodynamic actions following the displacement of a larger sheltering grain, and the fact that the collective entrainment of smaller grains following one large turbulent event is not accounted for. This work was partially funded by FEDER, program COMPETE, and by national funds through the Portuguese Foundation for Science and Technology (FCT), project RECI/ECM-HID/0371/2012.
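    The marginalization above is easy to sketch numerically (a toy with invented Gaussian densities, not the paper's measured PDFs): for each size class, P(E|M) is the probability that the instantaneous velocity exceeds the competent velocity, averaged over the competent-velocity PDF, and P(E) is the p_M-weighted sum.

        import numpy as np
        from scipy.stats import norm

        # Illustrative class definitions: f_u(u|M) and f_up(up|M) as Gaussians.
        classes = {
            "fine":   dict(p_M=0.6, u=norm(0.9, 0.25), up=norm(0.7, 0.10)),
            "coarse": dict(p_M=0.4, u=norm(0.9, 0.25), up=norm(1.2, 0.15)),
        }

        def p_entrain_given_class(c, n=200_000, seed=3):
            """P(E|M) = ∫ P(u > up | M) f_up(up|M) dup, by Monte Carlo over up."""
            up = c["up"].rvs(n, random_state=np.random.default_rng(seed))
            return np.mean(c["u"].sf(up))   # sf(up) = P(u > up)

        p_E = sum(c["p_M"] * p_entrain_given_class(c) for c in classes.values())
        print(f"P(E) = {p_E:.3f}")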

  9. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1980-01-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  10. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Astrophysics Data System (ADS)

    Kastner, S. O.; Bhatia, A. K.

    1980-08-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  11. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
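    The essence of the method can be sketched on a toy statistic (our construction, far simpler than the FWM decision variable): importance weights over histogram bins are iteratively adjusted to flatten the sampled histogram, which drives the walker into the rare tails; unweighting then recovers the PDF down to very small probabilities.

        import numpy as np

        rng = np.random.default_rng(4)
        edges = np.linspace(0.0, 1.0, 101)   # bins for x = mean of 16 uniforms
        bin_of = lambda v: int(np.clip(np.searchsorted(edges, v, side="right") - 1, 0, 99))

        def mcmc_pass(log_w, sweeps=100_000):
            """Metropolis sampling biased by exp(-log_w[bin]); returns a histogram."""
            hist = np.zeros(100)
            x = rng.random(16)
            i = bin_of(x.mean())
            for _ in range(sweeps):
                k = rng.integers(16)
                old = x[k]
                x[k] = rng.random()
                j = bin_of(x.mean())
                if np.log(rng.random()) < log_w[i] - log_w[j]:
                    i = j                    # accept the move
                else:
                    x[k] = old               # reject, restore the coordinate
                hist[i] += 1
            return hist

        log_w = np.zeros(100)
        for _ in range(6):                   # iteratively flatten the histogram
            log_w += np.log(np.maximum(mcmc_pass(log_w), 1.0))
            log_w -= log_w.max()

        h = mcmc_pass(log_w)
        pdf = h * np.exp(log_w)                    # unweight the biased histogram
        pdf /= pdf.sum() * (edges[1] - edges[0])   # normalize to a density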

  12. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
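    For reference, the multi-Rayleigh model itself is just a finite mixture of Rayleigh densities; a sketch with three illustrative components (weights and scales invented):

        import numpy as np

        def rayleigh_pdf(x, sigma2):
            """Rayleigh amplitude PDF with scale parameter sigma2 (= sigma**2)."""
            return (x / sigma2) * np.exp(-x**2 / (2.0 * sigma2))

        def multi_rayleigh_pdf(x, weights, sigma2s):
            """Weighted mixture of Rayleigh components; weights sum to 1."""
            return sum(w * rayleigh_pdf(x, s2) for w, s2 in zip(weights, sigma2s))

        x = np.linspace(0.0, 5.0, 500)
        pdf = multi_rayleigh_pdf(x, weights=[0.6, 0.3, 0.1], sigma2s=[0.3, 0.8, 2.0])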

  13. Applications of conformal field theory to problems in 2D percolation

    NASA Astrophysics Data System (ADS)

    Simmons, Jacob Joseph Harris

    This thesis explores critical two-dimensional percolation in bounded regions in the continuum limit. The main method which we employ is conformal field theory (CFT). Our specific results follow from the null-vector structure of the c = 0 CFT that applies to critical two-dimensional percolation. We also make use of the duality symmetry obeyed at the percolation point, and the fact that percolation may be understood as the q-state Potts model in the limit q → 1. Our first results describe the correlations between points in the bulk and boundary intervals or points, i.e. the probability that the various points or intervals are in the same percolation cluster. These quantities correspond to order-parameter profiles under the given conditions, or cluster connection probabilities. We consider two specific cases: an anchoring interval, and two anchoring points. We derive results for these and related geometries using the CFT null-vectors for the corresponding boundary condition changing (bcc) operators. In addition, we exhibit several exact relationships between these probabilities. These relations between the various bulk-boundary connection probabilities involve parameters of the CFT called operator product expansion (OPE) coefficients. We then compute several of these OPE coefficients, including those arising in our new probability relations. Beginning with the familiar CFT operator φ_{1,2}, which corresponds to a free-fixed spin boundary change in the q-state Potts model, we then develop physical interpretations of the bcc operators. We argue that, when properly normalized, higher-order bcc operators correspond to successive fusions of multiple φ_{1,2} operators. Finally, by identifying the derivative of φ_{1,2} with the operator φ_{1,4}, we derive several new quantities called first crossing densities. These new results are then combined and integrated to obtain the three previously known crossing quantities in a rectangle: the probability of a horizontal crossing cluster, the probability of a cluster crossing both horizontally and vertically, and the expected number of horizontal crossing clusters. These three results were known to be solutions of a certain fifth-order differential equation, but until now no physically meaningful explanation had appeared. This differential equation arises naturally in our derivation.

  14. [Demography and nesting ecology of green iguana, Iguana iguana (Squamata: Iguanidae), in 2 exploited populations in Depresión Momposina, Colombia].

    PubMed

    Muñoz, Eliana M; Ortega, Angela M; Bock, Brian C; Páez, Vivian P

    2003-03-01

    We studied the demography and nesting ecology of two populations of Iguana iguana that face heavy exploitation and habitat modification in the Momposina Depression, Colombia. Line transect data were analyzed using the Fourier model to provide estimates of social group densities, which were found to differ both within and among populations (1.05-6.0 groups/ha). Mean group size and overall iguana density estimates varied between populations as well (1.5-13.7 iguanas/ha). The density estimates were far lower than those reported from more protected areas in Panama and Venezuela. Iguana densities were consistently higher in sites located along rivers (2.5 iguanas/group) than in sites along the margins of marshes (1.5 iguanas/group), probably due to vegetational differences. There was no correlation between density estimates and estimates of relative abundance (number of iguanas seen/hour/person) due to differing detectabilities of iguana groups among sites. The adult sex ratio (1:2.5 males:females) agreed well with other reports in the literature based upon observation of adult social groups, and probably results from the polygynous mating system in this species rather than a real demographic skew. Nesting in this population occurs from the end of January through March and hatching occurs between April and May. We monitored 34 nests, which suffered little vertebrate predation, perhaps due to the lack of a complete vertebrate fauna in this densely inhabited area, but nests suffered from inundation, cattle trampling, and infestation by phorid fly larvae. Clutch sizes in these populations were lower than all other published reports except for the iguana population on the highly xeric island of Curaçao, implying that adult females in our area are unusually small. We argue that this is more likely the result of the exploitation of these populations rather than an adaptive response to environmentally extreme conditions.

  15. Laboratory-Tutorial Activities for Teaching Probability

    ERIC Educational Resources Information Center

    Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…

  16. Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA

    USGS Publications Warehouse

    Yarra, Allyson N.; Magoulick, Daniel D.

    2018-01-01

    Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.

  17. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n ≥ N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A^{-1}B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.

  18. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
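    To make the numerical core concrete, here is a deliberately naive sketch (ours): a rectangle-rule discretization of a first-kind Volterra equation g(t) = ∫₀ᵗ K(t,s) f(s) ds reduces it to a lower-triangular linear system. The paper's method adds the integral representation and a regularization step, which this toy omits, so it is only safe for smooth, noise-free data.

        import numpy as np

        def solve_volterra_first_kind(K, g, t):
            """Discretize g(t_i) ≈ h * sum_{j<=i} K(t_i, t_j) f(t_j) and solve
            the resulting lower-triangular system for f."""
            h = t[1] - t[0]
            A = np.tril(K(t[:, None], t[None, :])) * h
            return np.linalg.solve(A, g(t))

        # Toy check: K(t,s) = exp(t - s), f ≡ 1  =>  g(t) = exp(t) - 1.
        t = np.linspace(0.01, 2.0, 400)
        f = solve_volterra_first_kind(lambda ti, sj: np.exp(ti - sj),
                                      lambda tt: np.exp(tt) - 1.0, t)
        print(f.mean())   # close to 1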

  19. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal-energy-path and the average-energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
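    In our paraphrase of the definition (notation ours): for basins B_1, ..., B_n of a multi-minima landscape, the elementary reaction committor toward basin B_j evaluated at a configuration x is

        \[ q_j(x) = \Pr\bigl(\text{a trajectory started at } x \text{ reaches } B_j \text{ before any other basin}\bigr), \]

    and the probability density of reactive trajectories is the density of configurations visited by paths that actually proceed from the initial basin to B_j.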

  20. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
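    A compressed sketch of the optimization at the heart of MRE (in Python rather than MATLAB, returning only a point estimate rather than the full multivariate density the method actually derives; the matrix and bounds below are invented): minimize the relative entropy of m to its prior subject to Gm = d and the bound constraints.

        import numpy as np
        from scipy.optimize import minimize

        def mre_point_estimate(G, d, m_prior, lower, upper):
            """Minimize sum(m * log(m / m_prior)) subject to G @ m = d and bounds."""
            objective = lambda m: np.sum(m * np.log(m / m_prior))
            constraint = {"type": "eq", "fun": lambda m: G @ m - d}
            res = minimize(objective, x0=m_prior, method="SLSQP",
                           bounds=list(zip(lower, upper)), constraints=constraint)
            return res.x

        G = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])
        d = np.array([1.0, 1.5])
        m = mre_point_estimate(G, d, m_prior=np.full(3, 0.5),
                               lower=np.full(3, 1e-9), upper=np.full(3, 2.0))
        print(m, G @ m)   # G @ m should reproduce d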

  1. Using areas of known occupancy to identify sources of variation in detection probability of raptors: taking time lowers replication effort for surveys.

    PubMed

    Murn, Campbell; Holloway, Graham J

    2016-10-01

    Species occurring at low density can be difficult to detect and, if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that the low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
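    The reported survey effort follows from the averaged detection probability by a one-line calculation: absence is declared when the chance of missing the species on every one of n independent visits drops below 5%, i.e. the smallest n with (1 - p)^n ≤ 0.05.

        import math

        def visits_for_absence(p, alpha=0.05):
            """Smallest n with (1 - p)**n <= alpha."""
            return math.ceil(math.log(alpha) / math.log(1.0 - p))

        print(visits_for_absence(0.207))   # 13, matching the figure above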

  2. A Case Series of the Probability Density and Cumulative Distribution of Laryngeal Disease in a Tertiary Care Voice Center.

    PubMed

    de la Fuente, Jaime; Garrett, C Gaelyn; Ossoff, Robert; Vinson, Kim; Francis, David O; Gelbard, Alexander

    2017-11-01

    To examine the distribution of clinic and operative pathology in a tertiary care laryngology practice. Probability density and cumulative distribution analyses (Pareto analysis) were used to rank-order the laryngeal conditions seen in an outpatient tertiary care laryngology practice and those requiring surgical intervention during a 3-year period. Among 3783 new clinic consultations and 1380 operative procedures, voice disorders were the most common primary diagnostic category seen in clinic (n = 3223), followed by airway (n = 374) and swallowing (n = 186) disorders. Within the voice strata, the most common primary ICD-9 code used was dysphonia (41%), followed by unilateral vocal fold paralysis (UVFP) (9%) and cough (7%). Among new voice patients, 45% were found to have a structural abnormality. The most common surgical indications were laryngotracheal stenosis (37%), followed by recurrent respiratory papillomatosis (18%) and UVFP (17%). Nearly 55% of patients presenting to a tertiary referral laryngology practice did not have an identifiable structural abnormality in the larynx on direct or indirect examination. The distribution of ICD-9 codes requiring surgical intervention was disparate from that seen in clinic. Application of the Pareto principle may improve resource allocation in laryngology, but these initial results require confirmation across multiple institutions.

  3. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

    Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103

  4. Photoelectron Spectroscopy and Density Functional Theory Studies of Iron Sulfur (FeS)m- (m = 2-8) Cluster Anions: Coexisting Multiple Spin States.

    PubMed

    Yin, Shi; Bernstein, Elliot R

    2017-10-05

    Iron sulfur cluster anions (FeS)m- (m = 2-8) are studied by photoelectron spectroscopy (PES) at 3.492 eV (355 nm) and 4.661 eV (266 nm) photon energies, and by density functional theory (DFT) calculations. The most probable structures and ground state spin multiplicities for (FeS)m- (m = 2-8) clusters are tentatively assigned through a comparison of their theoretical and experimental first vertical detachment energy (VDE) values. Many spin states lie within 0.5 eV of the ground spin state for the larger (FeS)m- (m ≥ 4) clusters. Theoretical VDEs of these low-lying spin states are in good agreement with the experimental VDE values. Therefore, multiple spin states of each of these iron sulfur cluster anions probably coexist under the current experimental conditions. Such available multiple spin states must be considered when evaluating the properties and behavior of these iron sulfur clusters in real chemical and biological systems. The experimental first VDEs of (FeS)m- (m = 1-8) clusters are observed to change with the cluster size (number m). The first VDE trends noted can be related to the different properties of the highest singly occupied molecular orbitals (NBO, HSOMOs) of each cluster anion. The changing nature of the NBO/HSOMO of these (FeS)m- (m = 1-8) clusters from a p orbital on S, to a d orbital on Fe, and to an Fe-Fe bonding orbital is probably responsible for the observed increasing trend in their first VDEs with respect to m.

  5. Shallow slip amplification and enhanced tsunami hazard unravelled by dynamic simulations of mega-thrust earthquakes

    PubMed Central

    Murphy, S.; Scala, A.; Herrero, A.; Lorito, S.; Festa, G.; Trasatti, E.; Tonini, R.; Romano, F.; Molinari, I.; Nielsen, S.

    2016-01-01

    The 2011 Tohoku earthquake produced an unexpectedly large amount of shallow slip, greatly contributing to the ensuing tsunami. How frequent are such events? How can they be efficiently modelled for tsunami hazard? Stochastic slip models, which can be computed rapidly, are used to explore the natural slip variability; however, they generally do not deal specifically with shallow slip features. We study the systematic depth-dependence of slip along a thrust fault with a number of 2D dynamic simulations using stochastic shear stress distributions and a geometry based on the cross section of the Tohoku fault. We obtain a probability density for the slip distribution, which varies with depth, earthquake size, and whether the rupture breaks the surface. We propose a method to modify stochastic slip distributions according to this dynamically derived probability distribution. This method may be efficiently applied to produce large numbers of heterogeneous slip distributions for probabilistic tsunami hazard analysis. Using numerous M9 earthquake scenarios, we demonstrate that incorporating the dynamically derived probability distribution does enhance the conditional probability of exceedance of maximum estimated tsunami wave heights along the Japanese coast. This technique for integrating dynamic features in stochastic models can be extended to any subduction zone and faulting style. PMID:27725733

  6. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and an angular-dependent externally incident flux upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  7. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

    Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
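
    The bimodal U shape is easy to reproduce numerically. A hedged sketch, simulating a two-state renewal process with Pareto-tailed sojourn times (all parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    alpha = 0.5                   # sojourn exponent; alpha < 1 => infinite mean
    n_renewals, n_walkers = 400, 20_000

    # Heavy-tailed sojourn times, psi(t) ~ t**(-1 - alpha) for large t
    tau = rng.pareto(alpha, size=(n_walkers, n_renewals)) + 1.0

    # Alternate sojourns between two states; occupation fraction of state 0
    t0 = tau[:, 0::2].sum(axis=1)
    t1 = tau[:, 1::2].sum(axis=1)
    frac = t0 / (t0 + t1)

    # Probability density piles up near 0 and 1: the bimodal U shape
    hist, edges = np.histogram(frac, bins=10, range=(0, 1), density=True)
    for h, lo in zip(hist, edges):
        print(f"{lo:3.1f}-{lo + 0.1:3.1f} {'#' * int(10 * h)}")
    ```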

  8. Optimum space shuttle launch times relative to natural environment

    NASA Technical Reports Server (NTRS)

    King, R. L.

    1977-01-01

    The probabilities of favorable and unfavorable weather conditions for launch and landing of the STS under different criteria were computed for every three hours on a yearly basis using 14 years of weather data. These temporal probability distributions were considered for three sets of weather criteria encompassing benign, moderate, and severe weather conditions for both Kennedy Space Center and Edwards Air Force Base. In addition, the conditional probabilities were computed for unfavorable weather conditions occurring after a delay which may or may not be due to weather conditions. Also computed for KSC were the probabilities of favorable landing conditions at various times after favorable launch conditions had prevailed. The probabilities were computed to indicate the significance of each weather element to the overall result.

  9. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
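
    The notion of a "survey availability event" can be made concrete with a small Monte Carlo experiment; the lognormal density field below is a stand-in for a patchy benthic population, not the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Patchy population: strongly skewed station-level densities
    true_field = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)
    true_mean = true_field.mean()

    def availability_probability(n_samples, n_surveys=10_000):
        """Fraction of surveys whose mean is >125% or <75% of the truth."""
        idx = rng.integers(0, true_field.size, size=(n_surveys, n_samples))
        means = true_field[idx].mean(axis=1)
        return ((means < 0.75 * true_mean).mean(),
                (means > 1.25 * true_mean).mean())

    for n in (2, 4, 8, 15, 30):
        low, high = availability_probability(n)
        print(f"n={n:3d}  P(biased low)={low:.2f}  P(biased high)={high:.2f}")
    ```

    With few samples the biased-low probability clearly exceeds the biased-high probability, mirroring the differential bias described above; both shrink as the sample count grows.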

  10. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.

  11. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.

  12. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

    We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions fα,β(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities gα,β(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.

  13. Intermittent electron density and temperature fluctuations and associated fluxes in the Alcator C-Mod scrape-off layer

    NASA Astrophysics Data System (ADS)

    Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.

    2018-06-01

    The Alcator C-Mod mirror Langmuir probe system has been used to sample data time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one-second-long time series of electron density, temperature, radial electric drift velocity and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that probability distribution functions, power spectral densities as well as auto-correlation functions of the data time series agree well with predictions from the stochastic model. The electron particle and heat fluxes present large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation-driven heat flux.

  14. Controlling the Shannon Entropy of Quantum Systems

    PubMed Central

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking. PMID:23818819

  15. Controlling the Shannon entropy of quantum systems.

    PubMed

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.

  16. Probabilistic assessment of uncertain adaptive hybrid composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    Adaptive composite structures using actuation materials, such as piezoelectric fibers, were assessed probabilistically utilizing intraply hybrid composite mechanics in conjunction with probabilistic composite structural analysis. Uncertainties associated with the actuation material as well as the uncertainties in the regular (traditional) composite material properties were quantified and considered in the assessment. Static and buckling analyses were performed for rectangular panels with various boundary conditions and different control arrangements. The probability density functions of the structural behavior, such as maximum displacement and critical buckling load, were computationally simulated. The results of the assessment indicate that improved design and reliability can be achieved with actuation material.

  17. Design for cyclic loading endurance of composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.

    1993-01-01

    The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to the random pressure. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.

  18. Zone clearance in an infinite TASEP with a step initial condition

    NASA Astrophysics Data System (ADS)

    Cividini, Julien; Appert-Rolland, Cécile

    2017-06-01

    The TASEP is a paradigmatic model of out-of-equilibrium statistical physics, for which many quantities have been computed, either exactly or by approximate methods. In this work we study two new kinds of observables that have some relevance in biological or traffic models. They represent the probability for a given clearance zone of the lattice to be empty (for the first time) at a given time, starting from a step density profile. Exact expressions are obtained for single-time quantities, while more involved history-dependent observables are studied by Monte Carlo simulation, and partially predicted by a phenomenological approach.

  19. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach that combines Bayesian inference with physics-model-based signal processing, in which a radionuclide is represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.

  20. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
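
    A stripped-down sketch of the estimator's core idea: a tree's posterior probability is approximated by the product of conditional split frequencies rather than its raw sample frequency. The representation of trees as clade-to-split mappings is our simplification of the method described above:

    ```python
    from collections import Counter

    def conditional_clade_probability(tree_splits, sampled_trees):
        """Estimate a tree's posterior probability from conditional clade
        (split) frequencies. Each tree is a dict mapping a clade (frozenset
        of taxa) to the canonical side of its split (also a frozenset)."""
        clade_counts, split_counts = Counter(), Counter()
        for splits in sampled_trees:
            for clade, side in splits.items():
                clade_counts[clade] += 1
                split_counts[(clade, side)] += 1
        prob = 1.0
        for clade, side in tree_splits.items():
            if clade_counts[clade] == 0:   # clade never sampled
                return 0.0
            prob *= split_counts[(clade, side)] / clade_counts[clade]
        return prob

    # Toy posterior sample over two 4-taxon topologies
    ab, abc, root = frozenset("ab"), frozenset("abc"), frozenset("abcd")
    t1 = {root: ab}             # ((a,b),(c,d))
    t2 = {root: abc, abc: ab}   # (((a,b),c),d)
    sample = [t1] * 7 + [t2] * 3
    print(conditional_clade_probability(t2, sample))   # 0.3 for this sample
    ```

    Because the probability factors over clades, a topology never seen in the sample can still receive a nonzero estimate as long as all of its clades and splits appear in other sampled trees.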

  1. An Alternative Version of Conditional Probabilities and Bayes' Rule: An Application of Probability Logic

    ERIC Educational Resources Information Center

    Satake, Eiki; Amato, Philip P.

    2008-01-01

    This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…

  2. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
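
    A hedged sketch of the weighting step: each zf galaxy receives a Gaussian weight in log cumulative number density around the evolved median of the z0 selection. The drift and scatter values are placeholders, not the paper's calibrated functions:

    ```python
    import numpy as np

    def progenitor_weights(log_n_f, log_n_0, drift=0.1, scatter=0.3):
        """Gaussian weight for each zf galaxy being linked to a z0
        population selected at log cumulative number density log_n_0."""
        return np.exp(-0.5 * ((log_n_f - (log_n_0 + drift)) / scatter) ** 2)

    rng = np.random.default_rng(0)
    log_n_f = rng.normal(-3.0, 0.5, size=10_000)   # mock zf galaxies
    log_mstar = 11.0 - 0.5 * (log_n_f + 3.0)       # toy mass-density relation

    w = progenitor_weights(log_n_f, log_n_0=-3.0)
    print("weighted mean log M* =", np.average(log_mstar, weights=w))
    ```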

  3. Radiative transition of hydrogen-like ions in quantum plasma

    NASA Astrophysics Data System (ADS)

    Hu, Hongwei; Chen, Zhanbin; Chen, Wencong

    2016-12-01

    At fusion plasma electron temperature and number density regimes of 1 × 10^3-1 × 10^7 K and 1 × 10^28-1 × 10^31 /m^3, respectively, the excited states and radiative transition of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing the fusion plasma than the Debye screening model. The relativistic correction to bound-state energies of the low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude within the same number density regime.

  4. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
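
    The PDF method itself yields a deterministic PDE for the joint density; the sketch below sets up only the Monte Carlo side used for validation, with a toy one-machine swing equation driven by Ornstein-Uhlenbeck (time-correlated) power input. All parameter values are illustrative:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)

    # M * dw/dt = -D * w + P(t),   dP = -(P - P0)/tau dt + sigma dW
    M, D, P0, tau, sigma = 2.0, 1.0, 0.5, 1.0, 0.3
    dt, T, n_paths = 1e-3, 5.0, 10_000

    w = np.zeros(n_paths)
    P = np.full(n_paths, P0)
    for _ in range(int(T / dt)):   # Euler-Maruyama over all paths at once
        w += dt * (-D * w + P) / M
        P += -dt * (P - P0) / tau + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

    # Monte Carlo estimate of the frequency density p(w, T); the PDF method
    # obtains the same object by solving its deterministic PDE instead.
    pdf = gaussian_kde(w)
    grid = np.linspace(w.min(), w.max(), 200)
    print("mode of p(w, T):", grid[np.argmax(pdf(grid))])
    ```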

  5. Dendritic brushes under theta and poor solvent conditions

    NASA Astrophysics Data System (ADS)

    Gergidis, Leonidas N.; Kalogirou, Andreas; Charalambopoulos, Antonios; Vlahos, Costas

    2013-07-01

    The effects of solvent quality on the internal stratification of polymer brushes formed by dendron polymers up to the third generation were studied by means of molecular dynamics simulations with a Langevin thermostat. The distributions of polymer units and of the free ends, the radii of gyration, and the back-folding probabilities of the dendritic spacers were studied at the macroscopic states of theta and poor solvent. For high grafting densities we observed a small decrease in the height of the brush as the solvent quality decreased. The internal stratification in theta solvent was similar to the one we found in good solvent, with two and in some cases three kinds of populations containing short dendrons with weakly extended spacers, intermediate-height dendrons, and tall dendrons with highly stretched spacers. The differences increased as the grafting density decreased, and single-dendron populations were evident in theta and poor solvent. In poor solvent at low grafting densities, solvent micelles, polymeric pinned lamellae, and spherical and single-chain collapsed micelles were observed. The scaling dependence of the height of the dendritic brush for high-density brushes in both solvents was found to be in agreement with existing analytical results.

  6. Epidemics in interconnected small-world networks.

    PubMed

    Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong

    2015-01-01

    Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
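
    A compact simulation of the setup described above; graph sizes, rates, and interlink counts are illustrative, and networkx's Watts-Strogatz generator supplies the component small-world networks:

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(5)

    def steady_infection_density(rewire_p, beta=0.08, mu=0.05, n=500, k=6,
                                 n_interlinks=50, steps=2000):
        """Discrete-time SIS on two interconnected Watts-Strogatz networks."""
        g = nx.disjoint_union(nx.watts_strogatz_graph(n, k, rewire_p),
                              nx.watts_strogatz_graph(n, k, rewire_p))
        for u, v in zip(rng.choice(n, n_interlinks, replace=False),
                        rng.choice(n, n_interlinks, replace=False)):
            g.add_edge(int(u), int(v) + n)   # random layer-to-layer links
        adj = nx.to_numpy_array(g)
        infected = rng.random(2 * n) < 0.05  # 5% initial seeds
        for _ in range(steps):
            pressure = adj @ infected        # number of infected neighbours
            p_inf = 1 - (1 - beta) ** pressure
            infected = np.where(infected, rng.random(2 * n) > mu,
                                rng.random(2 * n) < p_inf)
        return infected.mean()

    for p in (0.0, 0.1, 0.5, 1.0):
        print(p, steady_infection_density(p))
    ```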

  7. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work.

  8. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.

  9. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    Formulation; 3.2.2 Methods of Accelerating Convergence; 3.2.3 Application to Image Deblurring; 3.2.4 Extensions; 3.3 Convergence of Iterative Signal... Noise-driven linear filters permit development of the joint probability density function, or likelihood function, for the image. With an expression... spatial linear filter driven by white noise (see Fig. 1). If the probability density function for the white noise is known...

  10. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
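
    A hedged sketch of the pipeline with the signed distance as the single predictive feature; training arrays and the new patient's distance distribution are synthetic placeholders:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)

    # Training: joint density of (signed distance to target, dose), pooled
    # over previous patients (synthetic toy data with a dose fall-off)
    dist_train = rng.uniform(-5, 30, 5000)                     # mm
    dose_train = (50 * np.exp(-np.clip(dist_train, 0, None) / 8)
                  + rng.normal(0, 2, 5000))                    # Gy
    joint = gaussian_kde(np.vstack([dist_train, dose_train]))
    d_marginal = gaussian_kde(dist_train)

    # Prediction: marginalize p(dose | distance) over the new patient's
    # own voxel distances inside the organ at risk
    dist_new = rng.uniform(2, 25, 200)                         # new voxels
    dose_grid = np.linspace(0, 60, 121)
    p_dose = np.zeros_like(dose_grid)
    for d in dist_new:
        p_joint = joint(np.vstack([np.full_like(dose_grid, d), dose_grid]))
        p_dose += p_joint / d_marginal(d)                      # p(dose | d)
    step = dose_grid[1] - dose_grid[0]
    p_dose /= p_dose.sum() * step

    # Cumulative DVH: fraction of volume receiving at least each dose level
    dvh = 1.0 - np.cumsum(p_dose) * step
    print(np.interp(30.0, dose_grid, dvh))   # e.g. V30 of the organ at risk
    ```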

  12. Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs

    NASA Astrophysics Data System (ADS)

    Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.

    2013-07-01

    Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc^-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc^-1.
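
    The Hamiltonian Monte Carlo machinery involved can be illustrated on a toy low-dimensional analogue: a Gaussian prior on the field plus a Gaussian likelihood tying it to an "observed" field. All dimensions and precisions below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n = 16
    C_inv = np.diag(1.0 / np.linspace(1.0, 3.0, n))   # prior precision
    y = rng.normal(0.0, 1.0, n)                       # "observed" field
    noise_prec = 4.0

    def potential(x):       # negative log posterior, up to a constant
        return 0.5 * x @ C_inv @ x + 0.5 * noise_prec * (x - y) @ (x - y)

    def grad_potential(x):
        return C_inv @ x + noise_prec * (x - y)

    def hmc_step(x, eps=0.05, n_leap=20):
        p = rng.normal(0.0, 1.0, n)                   # fresh momenta
        h0 = 0.5 * p @ p + potential(x)
        x_new = x.copy()
        p_new = p - 0.5 * eps * grad_potential(x_new) # leapfrog half-kick
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new -= eps * grad_potential(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_potential(x_new)
        h1 = 0.5 * p_new @ p_new + potential(x_new)
        return x_new if rng.random() < np.exp(h0 - h1) else x   # Metropolis

    x = np.zeros(n)
    draws = []
    for _ in range(2000):
        x = hmc_step(x)
        draws.append(x)
    print(np.mean(draws, axis=0)[:4])   # posterior mean, first components
    ```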

  13. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields an SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.

  14. Probability density function of non-reactive solute concentration in heterogeneous porous formations

    Treesearch

    Alberto Bellin; Daniele Tonina

    2007-01-01

    Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...

  15. Predictions of malaria vector distribution in Belize based on multispectral satellite data.

    PubMed

    Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J

    1996-03-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  16. Predictions of malaria vector distribution in Belize based on multispectral satellite data

    NASA Technical Reports Server (NTRS)

    Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.

    1996-01-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  17. Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation

    USGS Publications Warehouse

    Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.

    1998-01-01

    We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.

  18. Passive seismic monitoring of the Bering Glacier during its last surge event

    NASA Astrophysics Data System (ADS)

    Zhan, Z.

    2017-12-01

    The physical causes behind glacier surges are still unclear. Numerous lines of evidence suggest that they probably involve changes in glacier basal conditions, such as a switch of the basal water system from concentrated large tunnels to a distributed "layer" of "connected cavities". However, most remote sensing approaches cannot penetrate to the base to monitor such changes continuously. Here we apply seismic interferometry using ambient noise to monitor glacier seismic structures, especially to detect possible signatures of the hypothesized high-pressure water "layer". As an example, we derive an 11-year-long history of the seismic structure of the Bering Glacier, Alaska, covering its latest surge event. We observe substantial drops in Rayleigh and Love wave speeds across the glacier during the surge event, potentially caused by changes in crevasse density, glacier thickness, and basal conditions.

  19. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    ERIC Educational Resources Information Center

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…

  20. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for prognostic problems that are affected by uncertainties. However, most studies have adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided wave based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameters of the state equation. The proposed method is verified on fatigue tests of attachment lugs, which are a kind of important joint component in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
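
    A hedged sketch of the mixture-proposal idea on a toy Paris-law growth model; all parameters, noise levels, and measurements are illustrative, not the paper's values:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy Paris-law state equation: a' = a + C * (dsig * sqrt(pi a))**m * dN
    C_par, m_par, dsig, dN = 1e-10, 3.0, 80.0, 100
    sig_proc, sig_meas = 0.02, 0.05     # process / measurement noise (mm)

    def mean_growth(a):
        return a + C_par * (dsig * np.sqrt(np.pi * a)) ** m_par * dN

    def gauss(x, mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)

    def filter_step(particles, z, mix=0.5):
        """Draw a fraction `mix` of particles near the measurement z and the
        rest from the transition prior, then reweight by p(z|x)p(x|x_prev)/q(x)."""
        n = particles.size
        mu = mean_growth(particles)
        from_meas = rng.random(n) < mix
        prop = np.where(from_meas,
                        rng.normal(z, sig_meas, n),    # measurement-guided
                        rng.normal(mu, sig_proc, n))   # transition prior
        w = gauss(z, prop, sig_meas) * gauss(prop, mu, sig_proc)
        w /= mix * gauss(prop, z, sig_meas) + (1 - mix) * gauss(prop, mu, sig_proc)
        w /= w.sum()
        return prop[rng.choice(n, n, p=w)]             # resample

    particles = rng.normal(1.0, 0.05, 1000)            # initial crack (mm)
    for z in (1.04, 1.09, 1.15, 1.22):                 # hypothetical readings
        particles = filter_step(particles, z)
        print(f"estimated crack length: {particles.mean():.3f} mm")
    ```

    Drawing part of the proposal around the measurement keeps particles in high-likelihood regions, which is precisely what mitigates the degeneracy of the transition-prior proposal.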

  1. Hydrologic controls on basin-scale distribution of benthic macroinvertebrates

    NASA Astrophysics Data System (ADS)

    Bertuzzo, E.; Ceola, S.; Singer, G. A.; Battin, T. J.; Montanari, A.; Rinaldo, A.

    2013-12-01

    The presentation deals with the role of streamflow variability in basin-scale distributions of benthic macroinvertebrates. Specifically, we present a probabilistic analysis of the impacts of the variability, along the river network, of relevant hydraulic variables on the density of benthic macroinvertebrate species. The relevance of this work rests on the implications of the predictability of macroinvertebrate patterns within a catchment for fluvial ecosystem health, macroinvertebrates being commonly used as sensitive indicators, and on the effects of anthropogenic activity. The analytical tools presented here outline a novel procedure of general nature aiming at a spatially explicit quantitative assessment of how near-bed flow variability affects benthic macroinvertebrate abundance. Moving from the analytical characterization of the at-a-site probability distribution functions (pdfs) of streamflow and bottom shear stress, a spatial extension to a whole river network is performed, aiming at the definition of spatial maps of streamflow and bottom shear stress. Then, the bottom shear stress pdf, coupled with habitat suitability curves (e.g., empirical relations between species density and bottom shear stress) derived from field studies, is used to produce maps of macroinvertebrate suitability to shear stress conditions. Thus, moving from measured hydrologic conditions, possible effects of river streamflow alterations on macroinvertebrate densities may be assessed. We apply this framework to an Austrian river network, used as a benchmark for the analysis, for which rainfall and streamflow time series, river network hydraulic properties, and macroinvertebrate density data are available. A comparison between observed and "modeled" species densities at three locations along the examined river network is also presented. Although the proposed approach focuses on a single controlling factor, it has important implications for water resources management and fluvial ecosystem protection.

  2. Near Real Time Tools for ISS Plasma Science and Engineering Applications

    NASA Astrophysics Data System (ADS)

    Minow, J. I.; Willis, E. M.; Parker, L. N.; Shim, J.; Kuznetsova, M. M.; Pulkkinen, A. A.

    2013-12-01

    The International Space Station (ISS) program utilizes a plasma environment forecast for estimating electrical charging hazards for crews during extravehicular activity (EVA). The process uses ionospheric electron density and temperature measurements from the ISS Floating Potential Measurement Unit (FPMU) instrument suite, with the assumption that plasma conditions will remain constant for one to fourteen days, with a low probability of a space weather event that would significantly change the environment before an EVA. FPMU data are typically not available during EVAs; therefore, the most recent FPMU data available for characterizing the state of the ionosphere during an EVA are typically from a day or two before its start or after it has been completed. In addition to EVA support, information on ionospheric plasma densities is often needed in support of ISS science payloads and anomaly investigations during periods when the FPMU is not operating. This presentation describes the application of space weather tools developed by MSFC using data from near real time satellite radio occultation and ground based ionosonde measurements of ionospheric electron density, together with a first-principles ionosphere model providing electron density and temperature run in real time by GSFC. These applications are used to characterize the space environment during EVA periods when FPMU data are not available, monitor for large changes in ionospheric density that could render the ionosphere forecast and plasma hazard assessment invalid, and validate the assumption of 'persistence of conditions' used in deriving the hazard forecast. In addition, the tools are used to provide space environment input to science payloads on ISS and anomaly investigations during periods when the FPMU is not operating.

  3. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
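
    A sketch of the first estimator with one common automatic scaling choice, the normal-reference rule; the paper's interactive and data-driven procedures are not reproduced here:

    ```python
    import numpy as np

    def kde(x_eval, sample, h=None):
        """Gaussian kernel density estimate with an automatic bandwidth."""
        n = sample.size
        if h is None:   # normal-reference rule: h = 1.06 * sigma * n**(-1/5)
            h = 1.06 * sample.std(ddof=1) * n ** (-0.2)
        u = (x_eval[:, None] - sample[None, :]) / h
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    sample = np.random.default_rng(8).normal(0.0, 1.0, 400)
    grid = np.linspace(-4, 4, 161)
    density = kde(grid, sample)
    print((density * (grid[1] - grid[0])).sum())   # integrates to ~1
    ```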

  4. Uncertainty analysis for the steady-state flows in a dual throat nozzle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q.-Y.; Gottlieb, David; Hesthaven, Jan S.

    2005-03-20

    It is well known that the steady state of an isentropic flow in a dual-throat nozzle with equal throat areas is not unique. In particular there is a possibility that the flow contains a shock wave, whose location is determined solely by the initial condition. In this paper, we consider cases with uncertainty in this initial condition and use generalized polynomial chaos methods to study the steady-state solutions for stochastic initial conditions. Special interest is given to the statistics of the shock location. The polynomial chaos (PC) expansion modes are shown to be smooth functions of the spatial variable x, although each solution realization is discontinuous in the spatial variable x. When the variance of the initial condition is small, the probability density function of the shock location is computed with high accuracy. Otherwise, many terms are needed in the PC expansion to produce reasonable results due to the slow convergence of the PC expansion, caused by non-smoothness in random space.
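
    The slow convergence for non-smooth responses can be seen with a one-line stand-in for the shock location: a discontinuous function of a Gaussian random input, expanded in probabilists' Hermite polynomials (the quadrature weight is exp(-x^2/2), and E[He_k^2] = k!):

    ```python
    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    response = lambda xi: np.sign(xi)     # toy discontinuous response

    nodes, wts = hermegauss(200)          # Gauss-Hermite_e quadrature
    for order in (1, 3, 7, 15, 31):
        coeffs = np.zeros(order + 1)
        for k in range(order + 1):
            basis = np.zeros(k + 1)
            basis[k] = 1.0
            # c_k = E[f(xi) He_k(xi)] / k!
            coeffs[k] = ((wts * response(nodes) * hermeval(nodes, basis)).sum()
                         / (np.sqrt(2 * np.pi) * math.factorial(k)))
        err2 = ((wts * (hermeval(nodes, coeffs) - response(nodes)) ** 2).sum()
                / np.sqrt(2 * np.pi))
        print(f"PC order {order:2d}: L2 error {np.sqrt(err2):.3f}")
    ```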

  5. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit sum of mass fractions, the bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  6. In vivo NMR imaging of sodium-23 in the human head.

    PubMed

    Hilal, S K; Maudsley, A A; Ra, J B; Simon, H E; Roschmann, P; Wittekoek, S; Cho, Z H; Mun, S K

    1985-01-01

    We report the first clinical nuclear magnetic resonance (NMR) images of cerebral sodium distribution in normal volunteers and in patients with a variety of pathological lesions. We have used a 1.5 T NMR magnet system. When compared with proton distribution, sodium shows a greater variation in its concentration from tissue to tissue and from normal to pathological conditions. Image contrast calculated on the basis of sodium concentration is 7 to 18 times greater than that of proton spin density. Normal images emphasize the extracellular compartments. In the clinical studies, areas of recent or old cerebral infarction and tumors show a pronounced increase of sodium content (300-400%). Actual measurements of image density values indicate that there is probably a further accentuation of the contrast by the increased "NMR visibility" of sodium in infarcted tissue. Sodium imaging may prove to be a more sensitive means for early detection of some brain disorders than other imaging methods.

  7. Experimental demonstration of an Allee effect in microbial populations.

    PubMed

    Kaul, RajReni B; Kramer, Andrew M; Dobbs, Fred C; Drake, John M

    2016-04-01

    Microbial populations can be dispersal limited. However, microorganisms that successfully disperse into physiologically ideal environments are not guaranteed to establish. This observation contradicts the Baas-Becking tenet: 'Everything is everywhere, but the environment selects'. Allee effects, which manifest in the relationship between initial population density and probability of establishment, could explain this observation. Here, we experimentally demonstrate that small populations of Vibrio fischeri are subject to an intrinsic demographic Allee effect. Populations subjected to predation by the bacterivore Cafeteria roenbergensis display both intrinsic and extrinsic demographic Allee effects. The estimated critical threshold required to escape positive density-dependence is around 5, 20 or 90 cells ml^-1 under conditions of high carbon resources, low carbon resources or low carbon resources with predation, respectively. This work builds on the foundations of modern microbial ecology, demonstrating that mechanisms controlling macroorganisms apply to microorganisms, and provides a statistical method to detect Allee effects in data. © 2016 The Author(s).

  8. Experimental demonstration of an Allee effect in microbial populations

    PubMed Central

    Kramer, Andrew M.; Dobbs, Fred C.; Drake, John M.

    2016-01-01

    Microbial populations can be dispersal limited. However, microorganisms that successfully disperse into physiologically ideal environments are not guaranteed to establish. This observation contradicts the Baas-Becking tenet: ‘Everything is everywhere, but the environment selects’. Allee effects, which manifest in the relationship between initial population density and probability of establishment, could explain this observation. Here, we experimentally demonstrate that small populations of Vibrio fischeri are subject to an intrinsic demographic Allee effect. Populations subjected to predation by the bacterivore Cafeteria roenbergensis display both intrinsic and extrinsic demographic Allee effects. The estimated critical threshold required to escape positive density-dependence is around 5, 20 or 90 cells ml−1 under conditions of high carbon resources, low carbon resources or low carbon resources with predation, respectively. This work builds on the foundations of modern microbial ecology, demonstrating that mechanisms controlling macroorganisms apply to microorganisms, and provides a statistical method to detect Allee effects in data. PMID:27048467

  9. Divergence of perturbation theory in large scale structures

    NASA Astrophysics Data System (ADS)

    Pajer, Enrico; van der Woude, Drian

    2018-05-01

    We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.

  10. The physics and chemistry of the L134N molecular core

    NASA Technical Reports Server (NTRS)

    Swade, Daryl A.

    1989-01-01

    The dark cloud L134N is studied in detail via millimeter- and centimeter-wavelength emission-line spectra. A high-density core of molecular gas exists in L134N which has a kinetic temperature of about 12 K, a peak molecular hydrogen density of about 10^4.5/cu cm, and a mass of about 23 solar masses. The core may be the site of future star formation. Maps of emission from (C-18)O, CS, H(C-13)O(+), SO, NH3, and C3H2 reveal morphologically different distributions resulting in part from both varying physical conditions within the cloud and optical depth effects. Significant differences also exist which are probably due to chemical abundance variations. A consistent set of LTE chemical abundances has been estimated at as many as seven positions, which can be used to constrain chemical models of dark clouds.

  11. Microwave inversion of leaf area and inclination angle distributions from backscattered data

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Saleh, H. A.

    1985-01-01

    The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontally polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
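
    As a hedged illustration of the inversion step named above: Phillips-Twomey regularization replaces the ill-posed normal equations with the damped system (A^T A + lambda D^T D) f = A^T g, where D is a second-difference matrix. A toy sketch with a made-up smooth kernel standing in for the scattering model:

        import numpy as np

        def phillips_twomey(A, g, lam):
            # Second-difference smoothing matrix: rows are (1, -2, 1) stencils.
            n = A.shape[1]
            D = np.diff(np.eye(n), n=2, axis=0)
            return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ g)

        # Toy first-kind Fredholm problem g = K f with a smooth kernel.
        x = np.linspace(0.0, 1.0, 60)
        K = np.exp(-5.0 * (x[:, None] - x[None, :]) ** 2) * (x[1] - x[0])
        f_true = np.exp(-((x - 0.4) / 0.1) ** 2)
        g = K @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(x.size)

        # lam trades data fit against smoothness of the recovered density.
        f_rec = phillips_twomey(K, g, lam=1e-6)
        print(float(np.abs(f_rec - f_true).max()))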

  12. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two reasons. First, it allows the probability density function of a parameter that cannot be measured directly, the tension parameter of the vocal folds, to be updated. Second, the likelihood function, which is in general predefined using a known pdf, is here constructed in a new and different manner, using the considered system itself.
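
    A toy version of the Bayes updating step, assuming a uniform prior on a normalized tension parameter, a hypothetical affine tension-to-fundamental-frequency map, and a Gaussian likelihood whose spread is estimated from Monte Carlo realizations; all three are stand-ins for the paper's mechanical model and measured data:

        import numpy as np

        rng = np.random.default_rng(1)

        def f0_model(q):
            # Hypothetical surrogate: the real map from tension to fundamental
            # frequency comes from the voice production mechanical model.
            return 80.0 + 60.0 * q

        q_grid = np.linspace(0.0, 1.0, 201)      # support of the uniform prior
        prior = np.full(q_grid.size, 1.0 / q_grid.size)

        # Monte Carlo realizations of the random voice signal induce scatter in
        # the fundamental frequency; estimate that spread from samples.
        sigma_f0 = np.std(f0_model(0.5) + 3.0 * rng.standard_normal(2000))

        f0_measured = 118.0                      # hypothetical experimental value
        likelihood = np.exp(-0.5 * ((f0_measured - f0_model(q_grid)) / sigma_f0) ** 2)
        posterior = prior * likelihood
        posterior /= posterior.sum()
        print("posterior mean tension:", float(np.sum(q_grid * posterior)))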

  13. MAVEN in situ measurements of photochemical escape of oxygen from Mars

    NASA Astrophysics Data System (ADS)

    Lillis, Robert; Deighan, Justin; Fox, Jane; Bougher, Stephen; Lee, Yuni; Cravens, Thomas; Rahmati, Ali; Mahaffy, Paul; Benna, Mehdi; Groller, Hannes; Jakosky, Bruce

    2016-04-01

    One of the primary goals of the MAVEN mission is to characterize rates of atmospheric escape from Mars at the present epoch and relate those escape rates to solar drivers. One of the known escape processes is photochemical escape, where a) an exothermic chemical reaction in the atmosphere results in an upward-traveling neutral particle whose velocity exceeds planetary escape velocity and b) the particle is not prevented from escaping through subsequent collisions. At Mars, photochemical escape of oxygen is expected to be a significant channel for atmospheric escape, particularly in the early solar system when extreme ultraviolet (EUV) fluxes were much higher. Thus characterizing this escape process and its variability with solar drivers is central to understanding the role escape to space has played in Mars' climate evolution. We use near-periapsis (<400 km altitude) data from three MAVEN instruments: the Langmuir Probe and Waves (LPW) instrument measures electron density and temperature, the Suprathermal And Thermal Ion Composition (STATIC) experiment measures ion temperature and the Neutral Gas and Ion Mass Spectrometer (NGIMS) measures neutral and ion densities. For each profile of in situ measurements, we make several calculations, each as a function of altitude. The first uses electron densities and temperatures and simulates the dissociative recombination of both O2+ and CO2+ to calculate the probability distribution for the initial energies of the resulting hot oxygen atoms. The second is a Monte Carlo hot atom transport model that takes that distribution of initial O energies and the measured neutral density profiles and calculates the probability that a hot atom born at that altitude will escape. The third takes the measured electron and ion densities and electron temperatures and calculates the production rate of hot O atoms. We then multiply together the profiles of hot atom production and escape probability to get profiles of the production rate of escaping atoms. We integrate with respect to altitude to give us the escape flux of hot oxygen atoms for that periapsis pass. We have sufficient coverage in solar zenith angle (SZA) to estimate total escape rates for two intervals with the obvious assumption that escape rates are the same at all points with the same SZA. We estimate total escape rates of 3.5-5.8 × 10^25 s^-1 for Ls = 289° to 319° and 1.6-2.6 × 10^25 s^-1 for Ls = 326° to 348°. The latter is the most directly comparable to previous model-based estimates and is roughly in line with several of them. Total photochemical loss over Mars history is not very useful to calculate from such escape fluxes derived over a limited area and under limited conditions. A thicker atmosphere and much higher solar EUV in the past may change the dynamics of escape dramatically. In the future, we intend to use 3-D Monte Carlo models of global atmospheric escape, in concert with our in situ and remote measurements, to fully characterize photochemical escape under current conditions and carefully extrapolate back in time using further simulations with new boundary conditions.
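
    The final two steps reduce to a product of profiles and an altitude integral. A sketch with hypothetical profiles (not MAVEN data) showing the arithmetic:

        import numpy as np

        z_km = np.linspace(160.0, 400.0, 121)
        # Hypothetical hot-O production rate (cm^-3 s^-1) and escape probability.
        production = 1.0e4 * np.exp(-(z_km - 160.0) / 40.0)
        p_escape = 1.0 / (1.0 + np.exp(-(z_km - 220.0) / 15.0))

        # Escape flux = integral over altitude of production x escape probability.
        integrand = production * p_escape
        z_cm = z_km * 1.0e5                      # km -> cm for cgs units
        flux = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z_cm))
        print(f"escape flux ~ {flux:.2e} cm^-2 s^-1")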

  14. Towards Determining the Optimal Density of Groundwater Observation Networks under Uncertainty

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Kokosi, Angeliki; Mamounakis, Georgios

    2016-04-01

    Time series of groundwater level constitute one of the main sources of information when studying the availability of groundwater reserves, at a regional level, under changing climatic conditions. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the structure of the aquifer, and in particular by the spatial distribution of hydraulic conductivity (i.e. layering), dependencies in the transition rates between different geologic formations, juxtapositional tendencies, etc. In this work, we: 1) use the concept of transition probabilities embedded in a Markov chain setting to conditionally simulate synthetic aquifer structures representative of geologic formations commonly found in the literature (see e.g. Hoeksema and Kitanidis, 1985), and 2) study how the density of observation wells affects the estimation accuracy of hydraulic heads at unobserved locations. The obtained results are promising, pointing towards the direction of establishing design criteria based on the statistical structure of the aquifer, such as the level of dependence in the transition rates of observed lithologies. Reference: Hoeksema, R.J. and P.K. Kitanidis (1985) Analysis of spatial structure of properties of selected aquifers, Water Resources Research, 21(4), 563-572. Acknowledgments: This work is sponsored by the Onassis Foundation under the "Special Grant and Support Program for Scholars' Association Members".
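
    A minimal unconditional version of step 1, assuming a hypothetical one-step vertical transition-probability matrix over three lithologies; the study's conditional simulation additionally honors observed well data:

        import numpy as np

        rng = np.random.default_rng(0)
        facies = ["sand", "silt", "clay"]
        # Hypothetical Markov-chain transition probabilities between lithologies.
        P = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.70, 0.20],
                      [0.05, 0.25, 0.70]])

        def simulate_column(n_cells, start=0):
            # Draw each cell's lithology conditional on the one below it.
            states = [start]
            for _ in range(n_cells - 1):
                states.append(rng.choice(3, p=P[states[-1]]))
            return [facies[s] for s in states]

        print(simulate_column(20))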

  15. Distribution and Feeding Behavior of Omorgus suberosus (Coleoptera: Trogidae) in Lepidochelys olivacea Turtle Nests

    PubMed Central

    Baena, Martha L.; Escobar, Federico; Halffter, Gonzalo; García-Chávez, Juan H.

    2015-01-01

    Omorgus suberosus (Fabricius, 1775) has been identified as a potential predator of the eggs of the turtle Lepidochelys olivacea (Eschscholtz, 1829) on one of the main turtle nesting beaches in the world, La Escobilla in Oaxaca, Mexico. This study presents an analysis of the spatio–temporal distribution of the beetle on this beach (in areas of high and low density of L. olivacea nests over two arrival seasons) and an evaluation, under laboratory conditions, of the probability of damage to the turtle eggs by this beetle. O. suberosus adults and larvae exhibited an aggregated pattern at both turtle nest densities; however, aggregation was greater in areas of low nest density, where we found the highest proportion of damaged eggs. Also, there were fluctuations in the temporal distribution of the adult beetles following the arrival of the turtles on the beach. Under laboratory conditions, the beetles quickly damaged both dead eggs and a mixture of live and dead eggs, but were found to consume live eggs more slowly. This suggests that O. suberosus may be recycling organic material; however, its consumption of live eggs may be sufficient in some cases to interrupt the incubation period of the turtle. We intend to apply these results when making decisions regarding the L. olivacea nests on La Escobilla Beach, one of the most important sites for the conservation of this species. PMID:26422148

  16. Attenuated associations between increasing BMI and unfavorable lipid profiles in Chinese Buddhist vegetarians.

    PubMed

    Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun

    2013-01-01

    Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivorous men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipid/lipoprotein and the probability of coronary heart disease.

  17. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
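
    A short sketch of the observation the method rests on: histogramming at least a half cycle of samples approximates the waveform's amplitude PDF, which for a pure sinusoidal carrier is the U-shaped arcsine density (parameters below are arbitrary):

        import numpy as np

        fs, f0 = 10_000, 50.0                  # sample rate and carrier frequency, Hz
        t = np.arange(int(fs / f0)) / fs       # exactly one carrier cycle
        samples = np.sin(2.0 * np.pi * f0 * t)

        # Sorting samples by frequency of occurrence = building a histogram PDF.
        pdf, edges = np.histogram(samples, bins=20, range=(-1, 1), density=True)
        print(np.round(pdf, 2))                # mass piles up in the outermost bins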

  18. A partial differential equation for pseudocontact shift.

    PubMed

    Charnock, G T P; Kuprov, Ilya

    2014-10-07

    It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.

  19. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  20. Monte-Carlo computation of turbulent premixed methane/air ignition

    NASA Astrophysics Data System (ADS)

    Carmen, Christina Lieselotte

    The present work describes the results obtained by a time dependent numerical technique that simulates the early flame development of a spark-ignited, premixed, lean, gaseous methane/air mixture, with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation in order to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented, including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is also utilized to simulate phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, provide excellent agreement with results obtained by other experimental and numerical researchers.

  1. Making inference from wildlife collision data: inferring predator absence from prey strikes

    PubMed Central

    Hosack, Geoffrey R.; Barry, Simon C.

    2017-01-01

    Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to make valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application. PMID:28243534
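
    To illustrate only the conditioning arithmetic, a Poisson stand-in: if the mainland numerical response implied a rate of r fox strikes per lagomorph strike, the predictive probability of zero fox strikes after 15 lagomorph strikes would be exp(-15r). The value of r below is hypothetical, chosen to reproduce the reported order of magnitude; the paper's actual model is a fitted numerical response with uncertainty propagated:

        from math import exp

        r = 0.46                 # hypothetical fox strikes per lagomorph strike
        lam = 15 * r             # expected fox strikes given 15 lagomorph strikes
        p_zero = exp(-lam)       # Poisson predictive probability of zero strikes
        print(f"P(no fox strikes | 15 lagomorph strikes) = {p_zero:.3f}")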

  2. The Probability Approach to English If-Conditional Sentences

    ERIC Educational Resources Information Center

    Wu, Mei

    2012-01-01

    Users of the Probability Approach choose the right one from four basic types of conditional sentences--factual, predictive, hypothetical and counterfactual conditionals, by judging how likely (i.e. the probability) the event in the result-clause will take place when the condition in the if-clause is met. Thirty-three students from the experimental…

  3. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  4. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  5. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed considering the uncertainty conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defining the parallel updating rules of motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that no matter what the values of vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with the increase of vehicle density; when the vehicle braking probability is lower, it is easy to cause an emergency brake of vehicle and result in great fluctuation of saturated flow; the saturated flow decreases slightly with the increase of the pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, which shows the hesitant behavior of pedestrians when making the decision of backing; the maximum flow is sensitive to the pedestrian generation probability and rapidly decreases with increasing the pedestrian generation probability, the maximum flow is approximately equal to zero when the probability is more than 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and gets into serious congestion state rapidly with the increase of the pedestrian generation probability.
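
    For orientation, a vehicle-only Nagel-Schreckenberg skeleton; the model above adds pedestrian generation, acceleration-crossing and backing rules on top of an update loop of this shape (all parameters here are hypothetical):

        import numpy as np

        rng = np.random.default_rng(2)

        def nasch_step(pos, vel, L, vmax, p_brake):
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % L         # empty cells ahead
            vel = np.minimum(vel + 1, vmax)                 # accelerate
            vel = np.minimum(vel, gaps)                     # avoid collisions
            brake = rng.random(vel.size) < p_brake          # random braking
            vel = np.where(brake, np.maximum(vel - 1, 0), vel)
            return (pos + vel) % L, vel

        L, N = 200, 40                                      # cells and vehicles
        pos = rng.choice(L, size=N, replace=False)
        vel = np.zeros(N, dtype=int)
        for _ in range(500):
            pos, vel = nasch_step(pos, vel, L, vmax=5, p_brake=0.3)
        print("flow (vehicles/cell/step):", vel.mean() * N / L)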

  6. Synoptic observations of Jupiter's radio emissions: Average Statistical properties observed by Voyager

    NASA Technical Reports Server (NTRS)

    Alexander, J. K.; Carr, T. D.; Thieman, J. R.; Schauble, J. J.; Riddle, A. C.

    1980-01-01

    Observations of Jupiter's low frequency radio emissions collected over one month intervals before and after each Voyager encounter were analyzed. Compilations of occurrence probability, average power flux density and average sense of circular polarization are presented as a function of central meridian longitude, phase of Io, and frequency. The results are compared with ground-based observations. The necessary geometrical conditions and preferred polarization sense for Io-related decametric emission observed by Voyager from above both the dayside and nightside hemispheres are found to be essentially the same as those observed in Earth-based studies. On the other hand, there is a clear local time dependence in the Io-independent decametric emission. Io appears to have an influence on the average flux density of the emission down to below 2 MHz. The average power flux density spectrum of Jupiter's emission has a broad peak near 9 MHz. Integration of the average spectrum over all frequencies gives a total radiated power for an isotropic source of 4 × 10^11 W.

  7. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. [Are simple time lags responsible for cyclic variation of population density? : A comparison of laboratory population dynamics of Brachionus calyciflorus pallas (rotatoria) with computer simulations].

    PubMed

    Halbach, Udo; Burkhardt, Heinz Jürgen

    1972-09-01

    Laboratory populations of the rotifer Brachionus calyciflorus were cultured at different temperatures (25, 20, 15°C) but otherwise at constant conditions. The population densities showed relatively constant oscillations (Figs. 1 to 3A-C). Amplitudes and frequencies of the oscillations were positively correlated with temperature (Table 1). A test was made, whether the logistic growth function with simple time lag is able to describe the population curves. There are strong similarities between the simulations (Figs. 1-3E) and the real population dynamics if minor adjustments of the empirically determined parameters are made. Therefore it is suggested that time lags are responsible for the observed oscillations. However, the actual time lags probably do not act in the simple manner of the model, because birth and death rates react with different time lags, and both parameters are dependent on individual age and population density. A more complex model, which incorporates these modifications, should lead to a more realistic description of the observed oscillations.

  9. The role of demographic compensation theory in incidental take assessments for endangered species

    USGS Publications Warehouse

    McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts

    2011-01-01

    Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.

  10. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the allowed part of the pdf, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. In the full Kalman filter the formalism is prohibitively expensive for large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
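
    A small numerical illustration of the proposed representation, with hypothetical numbers: the analysis pdf of a bounded variable keeps a truncated Gaussian above the bound and collects the forbidden mass in a point mass at the bound, so the probability of the variable being exactly zero is genuinely nonzero:

        from scipy.stats import norm

        mu, sigma = 0.15, 0.4        # hypothetical Gaussian pdf for sea-ice conc.
        lo = 0.0                     # physical lower bound

        w0 = norm.cdf(lo, mu, sigma)             # mass moved to the bound
        alpha = (lo - mu) / sigma
        # Mean of the continuous (truncated Gaussian) part, E[X | X > lo].
        mean_trunc = mu + sigma * norm.pdf(alpha) / (1.0 - norm.cdf(alpha))
        mean_total = w0 * lo + (1.0 - w0) * mean_trunc
        print(f"P(X = 0) = {w0:.3f}, overall mean = {mean_total:.3f}")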

  11. Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.

  12. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form c x^k (1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  13. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
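
    One way to see the linearity, sketched under the beta/binomial assumptions stated above:

        With rate $x \sim \mathrm{Beta}(a,b)$ and $k$ jumps observed in $n$ increments,
        \[
          p(x \mid k) \propto x^{k}(1-x)^{n-k}\, x^{a-1}(1-x)^{b-1}
                      = x^{a+k-1}(1-x)^{b+n-k-1},
        \]
        so the posterior is $\mathrm{Beta}(a+k,\, b+n-k)$ and the MMSE estimate
        \[
          \hat{x} = E[x \mid k] = \frac{a+k}{a+b+n}
        \]
        is affine in the observed count $k$, i.e., linear in the observations.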

  14. Information Density and Syntactic Repetition.

    PubMed

    Temperley, David; Gildea, Daniel

    2015-11-01

    In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.

  15. Individual quality and age but not environmental or social conditions modulate costs of reproduction in a capital breeder.

    PubMed

    Debeffe, Lucie; Poissant, Jocelyn; McLoughlin, Philip D

    2017-08-01

    Costs associated with reproduction are widely known to play a role in the evolution of reproductive tactics with consequences to population and eco-evolutionary dynamics. Evaluating these costs as they pertain to species in the wild remains an important goal of evolutionary ecology. Individual heterogeneity, including differences in individual quality (i.e., among-individual differences in traits associated with survival and reproduction) or state, and variation in environmental and social conditions can modulate the costs of reproduction; however, few studies have considered effects of these factors simultaneously. Taking advantage of a detailed, long-term dataset for a population of feral horses (Sable Island, Nova Scotia, Canada), we address the question of how intrinsic (quality, age), environmental (winter severity, location), and social conditions (group size, composition, sex ratio, density) influence the costs of reproduction on subsequent reproduction. Individual quality was measured using a multivariate analysis on a combination of four static and dynamic traits expected to depict heterogeneity in individual performance. Female quality and age interacted with reproductive status of the previous year to determine current reproductive effort, while no effect of social or environmental covariates was found. High-quality females showed higher probabilities of giving birth and weaning their foal regardless of their reproductive status the previous year, while those of lower quality showed lower probabilities of producing foals in successive years. Middle-aged (prime) females had the highest probability of giving birth when they had not reproduced the year before, but no such relationship with age was found among females that had reproduced the previous year, indicating that prime-aged females bear higher costs of reproduction. We show that individual quality and age were key factors modulating the costs of reproduction in a capital breeder but that environmental or social conditions were not, highlighting the importance of considering multiple factors when studying costs of reproduction.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kastner, S.O.; Bhatia, A.K.

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  17. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  18. Action Potential Waveform Variability Limits Multi-Unit Separation in Freely Behaving Rats

    PubMed Central

    Stratton, Peter; Cheung, Allen; Wiles, Janet; Kiyatkin, Eugene; Sah, Pankaj; Windels, François

    2012-01-01

    Extracellular multi-unit recording is a widely used technique to study spontaneous and evoked neuronal activity in awake behaving animals. These recordings are done using either single-wire or multiwire electrodes such as tetrodes. In this study we have tested the ability of single-wire electrodes to discriminate activity from multiple neurons under conditions of varying noise and neuronal cell density. Using extracellular single-unit recording, coupled with iontophoresis to drive cell activity across a wide dynamic range, we studied spike waveform variability, and explored systematic differences in single-unit spike waveform within and between brain regions as well as the influence of signal-to-noise ratio (SNR) on the similarity of spike waveforms. We also modelled spike misclassification for a range of cell densities based on neuronal recordings obtained at different SNRs. Modelling predictions were confirmed by classifying spike waveforms from multiple cells with various SNRs using a leading commercial spike-sorting system. Our results show that for single-wire recordings, multiple units can only be reliably distinguished under conditions of high recording SNR (≥4) and low neuronal density (≈20,000/mm³). Physiological and behavioural changes, as well as technical limitations typical of awake animal preparations, reduce the accuracy of single-channel spike classification, resulting in serious classification errors. For SNR <4, the probability of misclassifying spikes approaches 100% in many cases. Our results suggest that in studies where the SNR is low or neuronal density is high, separation of distinct units needs to be evaluated with great caution. PMID:22719894

  19. Going through a quantum phase

    NASA Technical Reports Server (NTRS)

    Shapiro, Jeffrey H.

    1992-01-01

    Phase measurements on a single-mode radiation field are examined from a system-theoretic viewpoint. Quantum estimation theory is used to establish the primacy of the Susskind-Glogower (SG) phase operator; its phase eigenkets generate the probability operator measure (POM) for maximum likelihood phase estimation. A commuting observables description for the SG-POM on a signal × apparatus state space is derived. It is analogous to the signal-band × image-band formulation for optical heterodyne detection. Because heterodyning realizes the annihilation operator POM, this analogy may help realize the SG-POM. The wave function representation associated with the SG POM is then used to prove the duality between the phase measurement and the number operator measurement, from which a number-phase uncertainty principle is obtained, via Fourier theory, without recourse to linearization. Fourier theory is also employed to establish the principle of number-ket causality, leading to a Paley-Wiener condition that must be satisfied by the phase-measurement probability density function (PDF) for a single-mode field in an arbitrary quantum state. Finally, a two-mode phase measurement is shown to afford phase-conjugate quantum communication at zero error probability with finite average photon number. Application of this construct to interferometric precision measurements is briefly discussed.

  20. Some thoughts on Mercurian resources

    NASA Astrophysics Data System (ADS)

    Gillett, Stephen L.

    Virtually all scenarios on Solar System development ignore Mercury, but such inattention is probably undeserved. Once viable lunar and (probably) asteroidal facilities are established in the next century, Mercury warrants further investigation. Mercury's high solar energy density is a major potential advantage for space-based industries. Indeed, despite its higher gravity, Mercury is roughly twice as easy to leave as the Moon if the additional solar flux is taken into account. Moreover, with solar-driven technologies such as solar sails or electric propulsion, its depth in the Sun's gravity well is less important. Because Mercury is airless and almost certainly waterless, it will be an obvious place to export lunar technology, which will have been developed to deal with very similar conditions. Methods for extracting resources from anhydrous silicates will be particularly germane. Even without solar-powered propulsion, the discovery of low-delta-V access via multiple Venus and Earth encounters makes the planet easier to reach than had been thought. Technology developed for multi-year missions to asteroids and Mars should be readily adaptable to such Mercurian missions. Mercury will not be our first outpost in the Solar System. Nonetheless, as facilities are established in cis-Earth space, it probably merits attention as a next step for development.

  1. Turbulence and the Stabilization Principle

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    Further results of research, reported in several previous NASA Tech Briefs articles, were obtained on a mathematical formalism for postinstability motions of a dynamical system characterized by exponential divergences of trajectories leading to chaos (including turbulence). To recapitulate: fictitious control forces are introduced to couple the dynamical equations with a Liouville equation that describes the evolution of the probability density of errors in initial conditions. These forces create a powerful terminal attractor in probability space that corresponds to occurrence of a target trajectory with probability one. The effect in ordinary perceived three-dimensional space is to suppress exponential divergences of neighboring trajectories without affecting the target trajectory. Consequently, the postinstability motion is represented by a set of functions describing the evolution of such statistical quantities as expectations and higher moments, and this representation is stable. The previously reported findings are analyzed from the perspective of the author's Stabilization Principle, according to which (1) stability is recognized as an attribute of mathematical formalism rather than of underlying physics and (2) a dynamical system that appears unstable when modeled by differentiable functions only can be rendered stable by modifying the dynamical equations to incorporate intrinsic stochasticity.

  2. Trackline and point detection probabilities for acoustic surveys of Cuvier's and Blainville's beaked whales.

    PubMed

    Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha

    2013-09-01

    Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.
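
    The instantaneous g(0) reduces to a clicking duty cycle; a sketch with hypothetical numbers (the paper derives the clicking fraction from acoustic tag records and models its dependence on diving behavior):

        # Fraction of time a tagged whale produces regular echolocation clicks.
        dive_cycle_min = 140.0     # hypothetical mean dive-cycle duration, minutes
        clicking_min = 26.0        # hypothetical clicking time per cycle, minutes
        g0 = clicking_min / dive_cycle_min
        print(f"instantaneous g(0) ~ {g0:.2f}")   # compare 0.19-0.28 reported above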

  3. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  4. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems.

    PubMed

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  5. The risks and returns of stock investment in a financial market

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-03-01

    The risks and returns of stock investment are discussed via numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risks of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite results hold for weak EDS; (ii) increasing the initial position reduces the risks of stock investment, increases the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those reported in other studies, and good agreements are found between them.

  6. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  7. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.

  8. Characterization of non-Gaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.

  9. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  10. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which yields a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  11. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian, and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight, and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even when they are not. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.

  12. Spatial and temporal Brook Trout density dynamics: Implications for conservation, management, and monitoring

    USGS Publications Warehouse

    Wagner, Tyler; Deweber, Jefferson T.; Detar, Jason; Kristine, David; Sweka, John A.

    2014-01-01

    Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.

  13. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
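
    For readers wanting to reproduce this kind of calculation, the noncentral chi-square density and its quantiles are available in SciPy. The sketch below uses illustrative degrees of freedom and noncentrality values, not those fitted to the helicopter data.

        # Sketch: density and confidence limits for an averaged spectral
        # estimate modeled as noncentral chi-square. Parameter values are
        # illustrative assumptions only.
        import numpy as np
        from scipy.stats import ncx2

        k = 2 * 10   # degrees of freedom, e.g. 2 per averaged ordinate (assumed)
        nc = 5.0     # noncentrality from the deterministic harmonic component (assumed)
        x = np.linspace(0, 60, 300)
        pdf = ncx2.pdf(x, df=k, nc=nc)
        lo, hi = ncx2.ppf([0.025, 0.975], df=k, nc=nc)  # 95% confidence limits
        print(f"peak density {pdf.max():.4f} near x = {x[pdf.argmax()]:.1f}")
        print(f"95% limits: [{lo:.2f}, {hi:.2f}]")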

  14. Vocalization behavior and response of black rails

    USGS Publications Warehouse

    Legare, M.L.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis) (n = 43; 26 males, 17 females) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used regression coefficients from logistic regression equations to model the probability of a response conditional on the birds' sex, nesting status, distance to the playback source, and the time of survey. With a probability of 0.811, non-nesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). Linear regression was used to determine daily, monthly, and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the linear regression model were month (F = 3.89, df = 3, p = 0.0140), year (F = 9.37, df = 2, p = 0.0003), temperature (F = 5.44, df = 1, p = 0.0236), and month*year (F = 2.69, df = 5, p = 0.0311). The model was highly significant (p < 0.0001) and explained 53% of the variation in mean response per survey period (R2 = 0.5353). Response probability data obtained from the radio-tagged black rails and data from the weekly playback survey route were combined to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. Density estimates for black rails may be obtained from playback surveys and fixed-radius circular plots. Circular plots should be considered as having a radius of 80 m and be located so that the plot centers are 150 m apart. Playback tapes should contain one series of Kic-kic-kerr and Growl vocalizations recorded within the same geographic region as the study area. Surveys should be conducted from 0-2 hours after sunrise or 0-2 hours before sunset, during the pre-nesting season, and when wind velocity is < 20 kph. Observers should listen for 3-4 minutes after playing the survey tape and record responses heard during that time. Observers should be trained to identify black rail vocalizations and should have acceptable hearing ability. Given the number of variables that may have large effects on the response behavior of black rails to tape playback, we recommend that future studies using playback surveys be cautious when presenting estimates of 'absolute' density. Though our results did account for variation in response behavior, we believe that additional variation in vocal response among sites, breeding statuses, and bird densities remains in question. Playback surveys along fixed routes, providing a simple index of abundance, would be useful for monitoring populations over large geographic areas and over time. Considering the limitations of most agency resources for webless waterbirds, index surveys may be more appropriate. Future telemetry studies of this type on other species and at other sites would be useful to calibrate information obtained from playback surveys, whether reporting an index of abundance or a density estimate.
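
    The response-probability model described above is a standard logistic regression. The following minimal sketch evaluates such a model; the coefficients, covariate coding, and values are invented for illustration and are not the fitted values from the study.

        # Sketch: logistic model for P(response) given sex, nesting status,
        # distance to playback source, and survey time. All numbers are
        # hypothetical, not the study's fitted coefficients.
        import numpy as np

        def response_probability(beta, x):
            """Logistic model: P = 1 / (1 + exp(-(b0 + b . x)))."""
            z = beta[0] + np.dot(beta[1:], x)
            return 1.0 / (1.0 + np.exp(-z))

        beta = np.array([0.5, -1.2, -0.9, -0.01, 0.3])  # hypothetical coefficients
        x = np.array([0, 0, 50.0, 1])  # male, non-nesting, 50 m away, morning survey
        print(f"P(response) = {response_probability(beta, x):.3f}")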

  15. An investigation of student understanding of classical ideas related to quantum mechanics: Potential energy diagrams and spatial probability density

    NASA Astrophysics Data System (ADS)

    Stephanik, Brian Michael

    This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.

  16. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.

  17. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

    In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN), in which the capability of each sensor is relatively limited, so that large-scale WSNs can be constructed at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. Based on these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326

  18. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy.

    PubMed

    Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
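
    A minimal sketch of the histogram variant of the Renyi entropy calculation discussed above follows; the order alpha, bin count, and the synthetic RR series are illustrative assumptions, not the study's settings.

        # Sketch: Renyi entropy of order alpha from RR intervals using a
        # simple histogram probability estimate.
        import numpy as np

        def renyi_entropy(samples, alpha=2.0, bins=50):
            counts, _ = np.histogram(samples, bins=bins)
            p = counts[counts > 0] / counts.sum()
            if alpha == 1.0:                      # Shannon limit
                return -np.sum(p * np.log(p))
            return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

        # fake RR interval series in seconds, for illustration only
        rr = 0.8 + 0.05 * np.random.default_rng(1).standard_normal(5000)
        print(f"H_2 = {renyi_entropy(rr, alpha=2.0):.3f}")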

  19. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy

    PubMed Central

    Cornforth, David J.;  Tarvainen, Mika P.; Jelinek, Herbert F.

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN. PMID:25250311

  20. Derived distribution of floods based on the concept of partial area coverage with a climatic appeal

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro

    2000-02-01

    A new rationale for deriving the probability distribution of floods, which also helps in understanding the physical processes underlying the distribution itself, is presented. On this basis, a model resting on a number of new assumptions is developed. The basic ideas are as follows: (1) the peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area ua and the peak contributing area a; (2) the distribution of ua conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τa of the contributing part of the basin; and (3) τa is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of the density function of a times the conditional density function of ua given a. It is suggested that ua can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and ua are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km2. The results showed strong physical consistency, as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge, and suggested a new key to understanding the climatic control of the probability distribution of floods.
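
    The density of Q as an integral over contributing area can be evaluated numerically. The sketch below implements the product-distribution integral with the gamma and Weibull choices mentioned above; all parameter values are invented, and the truncation of the area density at the basin area is handled only crudely.

        # Sketch: f_Q(q) = int f_a(a) * f_{ua|a}(q/a) / a da, the density of
        # the product Q = ua * a. Parameters are illustrative, not fitted.
        import numpy as np
        from scipy.stats import gamma, weibull_min
        from scipy.integrate import quad

        A_total = 1000.0                          # total basin area (km^2), assumed
        f_a = gamma(a=2.0, scale=100.0).pdf       # contributing-area density (assumed)
        f_ua = weibull_min(c=1.5, scale=5.0).pdf  # runoff-per-unit-area density (assumed)

        def f_q(q):
            integrand = lambda a: f_a(a) * f_ua(q / a) / a
            val, _ = quad(integrand, 1e-3, A_total, limit=200)
            return val

        print(f"f_Q(500) = {f_q(500.0):.3e}")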

  1. On the role of electronic friction for dissociative adsorption and scattering of hydrogen molecules at a Ru(0001) surface.

    PubMed

    Füchsel, Gernot; Schimka, Selina; Saalfrank, Peter

    2013-09-12

    The role of electronic friction and, more generally, of nonadiabatic effects during dynamical processes at the gas/metal surface interface is still a matter of discussion. In particular, it is not clear if electronic nonadiabaticity has an effect under "mild" conditions, when molecules in low rovibrational states interact with a metal surface. In this paper, we investigate the role of electronic friction on the dissociative sticking and (inelastic) scattering of vibrationally and rotationally cold H2 molecules at a Ru(0001) surface theoretically. For this purpose, classical molecular dynamics with electronic friction (MDEF) calculations are performed and compared to MD simulations without friction. The two H atoms move on a six-dimensional potential energy surface generated from gradient-corrected density functional theory (DFT), that is, all molecular degrees of freedom are accounted for. Electronic friction is included via atomic friction coefficients obtained from an embedded atom, free electron gas (FEG) model, with embedding densities taken from gradient-corrected DFT. We find that within this model, dissociative sticking probabilities as a function of impact kinetic energies and impact angles are hardly affected by nonadiabatic effects. If one accounts for a possibly enhanced electronic friction near the dissociation barrier, on the other hand, reduced sticking probabilities are observed, in particular, at high impact energies. Further, there is always an influence on inelastic scattering, in particular, as far as the translational and internal energy distribution of the reflected molecules is concerned. Additionally, our results shed light on the role played by the velocity distribution of the incident molecular beam for adsorption probabilities, where, in particular, at higher impact energies, large effects are found.

  2. Kinetic effects in InP nanowire growth and stacking fault formation: the role of interface roughening.

    PubMed

    Chiaramonte, Thalita; Tizei, Luiz H G; Ugarte, Daniel; Cotta, Mônica A

    2011-05-11

    InP nanowire polytypic growth was thoroughly studied using electron microscopy techniques as a function of the In precursor flow. The dominant InP crystal structure is wurtzite, and growth parameters determine the density of stacking faults (SF) and zinc blende segments along the nanowires (NWs). Our results show that SF formation in InP NWs cannot be univocally attributed to the droplet supersaturation, if we assume this variable to be proportional to the ex situ In atomic concentration at the catalyst particle. An imbalance between this concentration and the axial growth rate was detected for growth conditions associated with larger SF densities along the NWs, suggesting a different route of precursor incorporation at the triple phase line in that case. The formation of SFs can be further enhanced by varying the In supply during growth and is suppressed for small diameter NWs grown under the same conditions. We attribute the observed behaviors to kinetically driven roughening of the semiconductor/metal interface. The consequent deformation of the triple phase line increases the probability of a phase change at the growth interface in an effort to reach local minima of system interface and surface energy.

  3. Predicting a quaternary tungsten oxide for sustainable photovoltaic application by density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, Pranab; Huda, Muhammad N., E-mail: huda@uta.edu; Al-Jassim, Mowafak M.

    2015-12-07

    A quaternary oxide, CuSnW2O8 (CTTO), has been predicted by density functional theory (DFT) to be a suitable material for sustainable photovoltaic applications. CTTO possesses band gaps of 1.25 eV (indirect) and 1.37 eV (direct), which were evaluated using the hybrid functional (HSE06) as a post-DFT method. The hole mobility of CTTO was higher than that of silicon. Further, optical absorption calculations demonstrate that CTTO is a better absorber of sunlight than Cu2ZnSnS4 and CuInxGa1-xSe2 (x = 0.5). In addition, CTTO exhibits rigorous thermodynamic stability comparable to WO3, as investigated by different thermodynamic approaches such as bonding cohesion, fragmentation tendency, and chemical potential analysis. Chemical potential analysis further revealed that CTTO can be synthesized at flexible experimental growth conditions, although the co-existence of at least one secondary phase is likely. Finally, like other Cu-based compounds, the formation of Cu vacancies is highly probable, even at Cu-rich growth condition, which could introduce p-type activity in CTTO.

  4. Vulnerability of reactive skin to electric current perception--a pilot study implicating mast cells and the lymphatic microvasculature.

    PubMed

    Quatresooz, Pascale; Piérard-Franchimont, Claudine; Piérard, Gérald E

    2009-09-01

    Sensitive/reactive skin is regarded as a manifestation of sensory irritation. This condition of susceptibility to various exogenous factors suggests the intervention of some neuropeptides and other neurobiological mediators. Mast cells are among the putatively implicated cells. The present immunohistochemical and morphometric study was performed on two groups of 36 gender- and age-matched subjects complaining or not of reactive skin as determined by electric current perception. In the mid upper part of the dermis, the numerical density of mast cells and the size of the microvasculature were assessed, distinguishing the blood and lymphatic vessels. Globally, the distributions of the data were broad in reactive skin. This condition was characterized by a prominent increase in both the numerical density of mast cells and the overall size of the lymphatics. By contrast, no difference was found in the size of cutaneous blood vessels. More precisely, it appeared that a subgroup of people with reactive skin exhibited these changes, contrasting with some other individuals whose data remained close to the normal range. Mast cells and lymphatics are probably involved in the process of sensory irritation affecting a subgroup of the population.

  5. Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.

    PubMed

    Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor

    2015-11-01

    Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables and do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in southern Spain, and the advantages of the compositional cokriging approach were demonstrated.

  6. Microdosimetric Analysis Confirms Similar Biological Effectiveness of External Exposure to Gamma-Rays and Internal Exposure to 137Cs, 134Cs, and 131I

    PubMed Central

    Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki

    2014-01-01

    The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as effectiveness of internal exposure relative to the external exposure to γ-rays) is occasionally believed to be much greater than unity due to insufficient discussions on the difference of their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses in subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as the external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure from the 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but their maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099

  7. The estimation of tree posterior probabilities using conditional clade probability distributions.

    PubMed

    Larget, Bret

    2013-07-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
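
    A toy version of the idea can be written directly: the posterior probability of a tree is approximated by the product of conditional clade (split) probabilities estimated from an MCMC sample. The split counts below are invented for illustration; this is a sketch of the principle, not the author's software.

        # Sketch: tree probability as a product of conditional clade
        # probabilities. Clades are frozensets of taxon labels; counts are toy data.
        from collections import Counter

        # counts of (parent clade -> (left child, right child)) splits in a sample
        split_counts = Counter({
            (frozenset("ABCD"), (frozenset("AB"), frozenset("CD"))): 60,
            (frozenset("ABCD"), (frozenset("ABC"), frozenset("D"))): 40,
            (frozenset("AB"), (frozenset("A"), frozenset("B"))): 60,
            (frozenset("CD"), (frozenset("C"), frozenset("D"))): 60,
        })

        def conditional_clade_prob(parent, split):
            total = sum(n for (p, _), n in split_counts.items() if p == parent)
            return split_counts[(parent, split)] / total

        # P(tree) ~= product of conditional split probabilities down the tree
        p = (conditional_clade_prob(frozenset("ABCD"), (frozenset("AB"), frozenset("CD")))
             * conditional_clade_prob(frozenset("AB"), (frozenset("A"), frozenset("B")))
             * conditional_clade_prob(frozenset("CD"), (frozenset("C"), frozenset("D"))))
        print(f"estimated P(tree) = {p:.3f}")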

  8. Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2002-01-01

    A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...

  9. Properties of Traffic Risk Coefficient

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu

    2009-10-01

    We use the model with consideration of the traffic interruption probability (Physica A 387 (2008) 6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability will reduce the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic safety.

  10. Probability mass first flush evaluation for combined sewer discharges.

    PubMed

    Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong

    2010-01-01

    The Korean government has put considerable effort into constructing sanitation facilities for controlling non-point source pollution. The first flush phenomenon is a prime example of such pollution. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms of the last two years, using probability density functions of rainfall volume to test representativeness, found all gauged storms to be valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 indicated similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
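
    The mass first flush ratio itself is simple to compute from a hydrograph and pollutograph: it is the fraction of pollutant mass delivered within the first n% of the runoff volume, divided by n/100. The sketch below uses synthetic illustrative data.

        # Sketch: mass first flush ratio MFFn from flow and concentration
        # series; MFF20 > 1 indicates a first flush. Data are synthetic.
        import numpy as np

        def mff(flow, conc, n=20):
            mass = flow * conc
            cum_v = np.cumsum(flow) / flow.sum()
            cum_m = np.cumsum(mass) / mass.sum()
            frac_mass = np.interp(n / 100.0, cum_v, cum_m)
            return frac_mass / (n / 100.0)

        t = np.linspace(0, 1, 100)
        flow = np.exp(-((t - 0.3) / 0.15) ** 2)   # synthetic hydrograph
        conc = 1.0 + 4.0 * np.exp(-t / 0.1)       # concentration peaking early
        print(f"MFF20 = {mff(flow, conc):.2f}")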

  11. Comparison of sticking probabilities of metal atoms in magnetron sputtering deposition of CuZnSnS films

    NASA Astrophysics Data System (ADS)

    Sasaki, K.; Kikuchi, S.

    2014-10-01

    In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2 l0 (2 - α) / (v α), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of the mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
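
    Chantry's zero-pressure lifetime expression can be inverted for the sticking probability in closed form. The sketch below solves 2 l0 (2 - α)/(v α) = τ0 for α; the chamber geometry ratio and lifetime values are assumed for illustration, not taken from the paper.

        # Sketch: invert tau0 = 2*l0*(2 - a)/(v*a) for the sticking probability a.
        import numpy as np

        def sticking_probability(tau0, l0, v):
            # tau0*v*a = 2*l0*(2 - a)  =>  a = 4*l0 / (tau0*v + 2*l0)
            return 4.0 * l0 / (tau0 * v + 2.0 * l0)

        kB, T, m_cu = 1.381e-23, 300.0, 63.5 * 1.66e-27
        v = np.sqrt(8 * kB * T / (np.pi * m_cu))  # mean thermal speed of Cu at 300 K
        l0 = 0.02    # volume/surface-area ratio of the chamber (m), assumed
        tau0 = 1e-3  # extrapolated zero-pressure lifetime (s), assumed
        print(f"alpha = {sticking_probability(tau0, l0, v):.3f}")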

  12. Statistical analysis of dislocations and dislocation boundaries from EBSD data.

    PubMed

    Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N

    2017-08-01

    Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows accounting for measurement noise and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle applied to TEM data before. In previous papers, dislocation boundaries were defined as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described with a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained from EBSD data, as reported in this paper. This opens the route for determining IDB and GNB probability density distribution functions separately from EBSD data, with increased statistical relevance as compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging.

  13. Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation

    NASA Technical Reports Server (NTRS)

    Steinwolf, Alexander; Rizzi, Stephen A.

    2003-01-01

    The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.

  14. Integrated stationary Ornstein-Uhlenbeck process, and double integral processes

    NASA Astrophysics Data System (ADS)

    Abundo, Mario; Pirozzi, Enrica

    2018-03-01

    We find a representation of the integral of the stationary Ornstein-Uhlenbeck (ISOU) process in terms of Brownian motion B_t; moreover, we show that, under certain conditions on the functions f and g, the double integral process (DIP) D(t) = ∫_β^t g(s) (∫_α^s f(u) dB_u) ds can be thought of as the integral of a suitable Gauss-Markov process. Some theoretical and application details are given; among them, we provide a simulation formula based on that representation by which sample paths, probability densities and first-passage times of the ISOU process are obtained; the first-passage times of the DIP are also studied.
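
    A simple way to obtain ISOU sample paths (though not the paper's representation formula) is direct Euler-Maruyama integration of the OU process followed by cumulative integration; the parameters below are illustrative.

        # Sketch: sample path of the integrated stationary OU (ISOU) process
        # by Euler-Maruyama. theta, sigma, dt are illustrative choices.
        import numpy as np

        rng = np.random.default_rng(42)
        theta, sigma, dt, n = 1.0, 0.5, 1e-3, 10_000
        x = rng.normal(0.0, sigma / np.sqrt(2 * theta))  # stationary initial condition
        X = np.empty(n)   # OU process
        I = np.empty(n)   # its running time integral (the ISOU process)
        integral = 0.0
        for k in range(n):
            x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            integral += x * dt
            X[k], I[k] = x, integral
        print(f"I(T) = {integral:.4f}")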

  15. The isolation limits of stochastic vibration

    NASA Technical Reports Server (NTRS)

    Knopse, C. R.; Allaire, P. E.

    1993-01-01

    The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.

  16. Estimate of radiation damage to low-level electronics of the RF system in the LHC cavities arising from beam gas collisions.

    PubMed

    Butterworth, A; Ferrari, A; Tsoulou, E; Vlachoudis, V; Wijnands, T

    2005-01-01

    Monte Carlo simulations have been performed to estimate the radiation damage induced by high-energy hadrons in the digital electronics of the RF low-level systems in the LHC cavities. High-energy hadrons are generated when the proton beams interact with the residual gas. The contributions from various elements (vacuum chambers, cryogenic cavities, wideband pickups, and cryomodule beam tubes) have been considered individually, with each contribution depending on the gas composition and density. The probability of displacement damage and single event effects (mainly single event upsets) is derived for the LHC start-up conditions.

  17. EFFECTS OF LASER RADIATION ON MATTER. LASER PLASMA: Feasibility of investigation of optical breakdown statistics using multifrequency lasers

    NASA Astrophysics Data System (ADS)

    Ulanov, S. F.

    1990-06-01

    A method proposed for investigating the statistics of bulk optical breakdown relies on multifrequency lasers, which eliminates the influence of the laser radiation intensity statistics. The method is based on preliminary recording of the peak intensity statistics of multifrequency laser radiation pulses at the caustic using the optical breakdown threshold of K8 glass. The probability density distribution function was obtained at the focus for the peak intensities of the radiation pulses of a multifrequency laser. This method may be used to study the self-interaction under conditions of bulk optical breakdown of transparent dielectrics.

  18. On the origins of approximations for stochastic chemical kinetics.

    PubMed

    Haseltine, Eric L; Rawlings, James B

    2005-10-22

    This paper considers the derivation of approximations for stochastic chemical kinetics governed by the discrete master equation. Here, the concepts of (1) partitioning on the basis of fast and slow reactions as opposed to fast and slow species and (2) conditional probability densities are used to derive approximate, partitioned master equations, which are Markovian in nature, from the original master equation. Under different conditions dictated by relaxation time arguments, such approximations give rise to both the equilibrium and hybrid (deterministic or Langevin equations coupled with discrete stochastic simulation) approximations previously reported. In addition, the derivation points out several weaknesses in previous justifications of both the hybrid and equilibrium systems and demonstrates the connection between the original and approximate master equations. Two simple examples illustrate situations in which these two approximate methods are applicable and demonstrate the two methods' efficiencies.
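
    The baseline that these approximations start from is the exact discrete-state simulation of the master equation. A minimal Gillespie-type sketch for a toy reversible isomerization follows; species and rates are illustrative.

        # Sketch: exact stochastic simulation (Gillespie) of A <-> B,
        # the discrete master-equation baseline. Rates are illustrative.
        import numpy as np

        rng = np.random.default_rng(7)
        k_f, k_r = 1.0, 0.5
        nA, nB, t, t_end = 100, 0, 0.0, 10.0
        while t < t_end:
            a = np.array([k_f * nA, k_r * nB])  # reaction propensities
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)      # time to next reaction
            if rng.random() < a[0] / a0:        # choose which reaction fires
                nA, nB = nA - 1, nB + 1
            else:
                nA, nB = nA + 1, nB - 1
        print(f"t = {t:.2f}, A = {nA}, B = {nB}")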

  19. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-04-01

    Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits the singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of theory of the random multiplicative process a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
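
    The ACD(1) recursion is a Kesten-type random multiplicative process, which is why a power-law tail appears. A minimal simulation sketch follows, with an assumed parameterization psi_i = omega + a*x_{i-1} and unit-mean exponential innovations; the crude tail-slope estimate is only indicative.

        # Sketch: simulate x_i = psi_i * eps_i with psi_i = omega + a*x_{i-1};
        # the stationary density develops a power-law tail. Parameters illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        omega, a, n = 0.1, 0.9, 200_000
        x = np.empty(n)
        prev = omega
        for i in range(n):
            eps = rng.exponential(1.0)       # i.i.d. unit-mean innovations
            prev = (omega + a * prev) * eps  # Kesten-type multiplicative recursion
            x[i] = prev

        # crude tail check: survival function ~linear on log-log axes for a power law
        k0 = int(0.99 * n)
        xs = np.sort(x)[k0:]
        surv = 1.0 - np.arange(k0, n) / n
        slope = np.polyfit(np.log(xs), np.log(surv), 1)[0]
        print(f"tail exponent estimate ~ {abs(slope):.2f}")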

  20. Explanation of power law behavior of autoregressive conditional duration processes based on the random multiplicative process.

    PubMed

    Sato, Aki-Hiro

    2004-04-01

    Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits the singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of theory of the random multiplicative process a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.

  1. Effects of Lewis number on the statistics of the invariants of the velocity gradient tensor and local flow topologies in turbulent premixed flames

    NASA Astrophysics Data System (ADS)

    Wacks, Daniel; Konstantinou, Ilias; Chakraborty, Nilanjan

    2018-04-01

    The behaviours of the three invariants of the velocity gradient tensor and the resultant local flow topologies in turbulent premixed flames have been analysed using three-dimensional direct numerical simulation data for different values of the characteristic Lewis number ranging from 0.34 to 1.2. The results have been analysed to reveal the statistical behaviours of the invariants and the flow topologies conditional upon the reaction progress variable. The behaviours of the invariants have been explained in terms of the relative strengths of the thermal and mass diffusions, embodied by the influence of the Lewis number on turbulent premixed combustion. Similarly, the behaviours of the flow topologies have been explained in terms not only of the Lewis number but also of the likelihood of the occurrence of individual flow topologies in the different flame regions. Furthermore, the sensitivity of the joint probability density function of the second and third invariants and the joint probability density functions of the mean and Gaussian curvatures to the variation in Lewis number have similarly been examined. Finally, the dependences of the scalar-turbulence interaction term on augmented heat release and of the vortex-stretching term on flame-induced turbulence have been explained in terms of the Lewis number, flow topology and reaction progress variable.
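
    For reference, the three invariants analysed above follow from the characteristic polynomial of the velocity gradient tensor, lambda^3 + P*lambda^2 + Q*lambda + R = 0. The sketch below computes them for a random trace-free placeholder tensor standing in for DNS data.

        # Sketch: invariants P, Q, R of a velocity gradient tensor A_ij = du_i/dx_j.
        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.standard_normal((3, 3))
        A -= np.trace(A) / 3 * np.eye(3)    # make it trace-free (incompressible flow)

        P = -np.trace(A)                    # first invariant (zero here)
        Q = 0.5 * (P**2 - np.trace(A @ A))  # second invariant
        R = -np.linalg.det(A)               # third invariant
        print(f"P={P:.3f}, Q={Q:.3f}, R={R:.3f}")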

  2. Effects of Lewis number on the statistics of the invariants of the velocity gradient tensor and local flow topologies in turbulent premixed flames

    PubMed Central

    Konstantinou, Ilias; Chakraborty, Nilanjan

    2018-01-01

    The behaviours of the three invariants of the velocity gradient tensor and the resultant local flow topologies in turbulent premixed flames have been analysed using three-dimensional direct numerical simulation data for different values of the characteristic Lewis number ranging from 0.34 to 1.2. The results have been analysed to reveal the statistical behaviours of the invariants and the flow topologies conditional upon the reaction progress variable. The behaviours of the invariants have been explained in terms of the relative strengths of the thermal and mass diffusions, embodied by the influence of the Lewis number on turbulent premixed combustion. Similarly, the behaviours of the flow topologies have been explained in terms not only of the Lewis number but also of the likelihood of the occurrence of individual flow topologies in the different flame regions. Furthermore, the sensitivity of the joint probability density function of the second and third invariants and the joint probability density functions of the mean and Gaussian curvatures to the variation in Lewis number have similarly been examined. Finally, the dependences of the scalar-turbulence interaction term on augmented heat release and of the vortex-stretching term on flame-induced turbulence have been explained in terms of the Lewis number, flow topology and reaction progress variable. PMID:29740257

  3. Carbonic fluid inclusions in amphibolite-facies pelitic schists from Bodonch area, western Mongolian Altai

    NASA Astrophysics Data System (ADS)

    Zorigtkhuu, Oyun-Erdene; Tsunogae, Toshiaki; Dash, Batulzii

    We report the first fluid inclusion data on amphibolite-facies pelitic schists from the Bodonch area of the western Mongolian Altai in the Central Asian Orogenic Belt. Three categories of fluid inclusions have been observed in quartz: dominant primary and secondary inclusions, and least abundant pseudosecondary inclusions. The melting temperatures of all categories of inclusions lie in the narrow range of -57.5 °C to -56.6 °C, close to the triple point of pure CO2. Homogenization of fluids occurs into the liquid phase at temperatures between -33.3 °C and +19.4 °C, which convert to densities in the range of 0.78 g/cm3 to 1.09 g/cm3. The estimated CO2 isochores for the primary and pseudosecondary high-density inclusions are broadly consistent with the peak metamorphic condition of the studied area (6.3-7.3 kbar at 655 °C). The results of this study, together with the primary and pseudosecondary nature of the inclusions, indicate that CO2 was the dominant fluid component during the peak amphibolite-facies metamorphism of the study area. The examined quartz grains are texturally associated with biotite, kyanite and staurolite, which are regarded as high-grade minerals formed during prograde to peak metamorphism. Therefore, the quartz probably formed by high-grade metamorphism, and the primary fluid inclusions trapped in the minerals probably preserve fluids from around peak metamorphism.

  4. Probability of occurrence of planetary ionosphere storms associated with the magnetosphere disturbance storm time events

    NASA Astrophysics Data System (ADS)

    Gulyaeva, T. L.; Arikan, F.; Stanislawska, I.

    2014-11-01

    The ionospheric W index makes it possible to distinguish the state of the ionosphere and plasmasphere from quiet conditions (W = 0 or ±1) to intense storm (W = ±4), grading plasma density enhancements (positive phase) and plasma density depletions (negative phase) relative to the quiet ionosphere. The global W index maps are produced for the period 1999-2014 from Global Ionospheric Maps of Total Electron Content, GIM-TEC, designed by the Jet Propulsion Laboratory, converted from the geographic frame (-87.5:2.5:87.5° in latitude, -180:5:180° in longitude) to the geomagnetic frame (-85:5:85° in magnetic latitude, -180:5:180° in magnetic longitude). The probability of occurrence of a planetary ionosphere storm during a magnetic disturbance storm time, Dst, event is evaluated with superposed epoch analysis for 77 intense storms (Dst ≤ -100 nT) and 230 moderate storms (-100 < Dst ≤ -50 nT), with start time, t0, defined at the Dst storm main phase onset. It is found that the intensity of the negative storm, iW-, exceeds the intensity of the positive storm, iW+, by a factor of 1.5-2. An empirical formula for iW+ and iW- in terms of peak Dst is deduced, exhibiting opposite trends in the relation between the intensity of the ionosphere-plasmasphere storm and the intensity of the Dst storm.

  5. Seed storage conditions change the germination pattern of clonal growth plants in Mediterranean salt marshes

    USGS Publications Warehouse

    Espinar, J.L.; Garcia, L.V.; Clemente, L.

    2005-01-01

    The effect of salinity level and extended exposure to different salinity and flooding conditions on germination patterns of three saltmarsh clonal growth plants (Juncus subulatus, Scirpus litoralis, and S. maritimus) was studied. Seed exposure to extended flooding and saline conditions significantly affected the outcome of the germination process in a different, though predictable, way for each species, after favorable conditions for germination were restored. Tolerance of the germination process was related to the average salinity level measured during the growth/germination season at sites where established individuals of each species dominated the species cover. No relationship was found between salinity tolerance of the germination process and seed response to extended exposure to flooding and salinity conditions. The salinity response was significantly related to the conditions prevailing in the habitats of the respective species during the unfavorable (nongrowth/nongermination) season. Our results indicate that changes in salinity and hydrology while seeds are dormant affect the outcome of the seed-bank response, even when conditions at germination are identical. Because these environmental-history-dependent responses differentially affect seed germination, seedling density, and probably sexual recruitment in the studied and related species, these influences should be considered for wetland restoration and management.

  6. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    PubMed

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  7. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

    With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data beforehand is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
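
    For contrast with the KDESOINN extension, plain kernel density estimation can be written in a few lines; the sketch below uses a Gaussian kernel and Scott's rule bandwidth on synthetic bimodal data (illustrative assumptions throughout).

        # Sketch: 1-D Gaussian kernel density estimation, the KDE baseline
        # that KDESOINN extends with SOINN prototypes.
        import numpy as np

        def kde(samples, x, bandwidth=None):
            samples = np.asarray(samples)
            if bandwidth is None:
                bandwidth = samples.std() * len(samples) ** (-1 / 5)  # Scott's rule
            u = (x[:, None] - samples[None, :]) / bandwidth
            kernels = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)        # Gaussian kernel
            return kernels.mean(axis=1) / bandwidth

        rng = np.random.default_rng(9)
        data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
        grid = np.linspace(-6, 6, 200)
        density = kde(data, grid)
        print(f"integral ~ {(density * (grid[1] - grid[0])).sum():.3f}")  # close to 1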

  8. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    NASA Astrophysics Data System (ADS)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA(2,1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.

  9. Eruption dynamics of Hawaiian-style fountains: The case study of episode 1 of the Kilauea Iki 1959 eruption

    USGS Publications Warehouse

    Stovall, W.K.; Houghton, Bruce F.; Gonnermann, H.; Fagents, S.A.; Swanson, D.A.

    2011-01-01

    Hawaiian eruptions are characterized by fountains of gas and ejecta, sustained for hours to days, that reach tens to hundreds of meters in height. Quantitative analysis of the pyroclastic products from the 1959 eruption of Kīlauea Iki, Kīlauea volcano, Hawai'i, provides insights into the processes occurring during typical Hawaiian fountaining activity. This short-lived but powerful eruption contained 17 fountaining episodes and produced a cone and tephra blanket as well as a lava lake that interacted with the vent and fountain during all but the first episode of the eruption, the focus of this paper. Microtextural analysis of Hawaiian fountaining products from this opening episode is used to infer vesiculation processes within the fountain and shallow conduit. Vesicle number densities for all clasts are high (10^6-10^7 cm^-3). Post-fragmentation expansion of bubbles within the thermally insulated fountain overprints the pre-fragmentation bubble populations, leading to a reduction in vesicle number density and an increase in mean vesicle size. However, early quenched rims of some clasts, with vesicle number densities approaching 10^7 cm^-3, are probably a valid approximation to magma conditions near fragmentation. The extent of clast evolution from low vesicle-to-melt ratio and corresponding high vesicle number density to higher vesicle-to-melt ratio and lower vesicle number density corresponds to the length of residence time within the fountain.

  10. First-passage problems: A probabilistic dynamic analysis for degraded structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1990-01-01

    Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
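
    First-passage statistics of this kind are commonly checked by Monte Carlo simulation. The sketch below estimates first-passage times of a randomly excited linear oscillator against a fixed symmetric barrier; the system, noise scaling, and barrier level are illustrative assumptions, not the paper's degraded-structure model.

        # Sketch: Monte Carlo first-passage times of a white-noise-driven
        # linear oscillator crossing a fixed barrier. All values illustrative.
        import numpy as np

        rng = np.random.default_rng(11)
        wn, zeta, dt, t_end = 2 * np.pi, 0.05, 1e-3, 10.0
        barrier = 0.3            # roughly 2 standard deviations of the response here
        n_steps, n_paths = int(t_end / dt), 200
        passage_times = []
        for _ in range(n_paths):
            x = v = 0.0
            for k in range(n_steps):
                f = rng.standard_normal() / np.sqrt(dt)   # discrete white-noise forcing
                v += (-2 * zeta * wn * v - wn**2 * x + f) * dt
                x += v * dt
                if abs(x) > barrier:                      # first crossing of the barrier
                    passage_times.append(k * dt)
                    break
        print(f"crossed: {len(passage_times)}/{n_paths}, "
              f"median first-passage time = {np.median(passage_times):.2f} s")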

  11. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  12. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods for estimating the abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate the density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km²) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 individual of unknown sex). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km² (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km² for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
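
    Spatial capture–recapture models of this kind typically let detection probability decay with the distance between an animal's activity center and a detector, often via a half-normal kernel. The sketch below illustrates such a kernel; the values of g0 and sigma are illustrative assumptions, and the paper's sex and survey-effort covariates are omitted.

```python
import numpy as np

# Half-normal detection function commonly used in spatial capture-recapture:
# the probability of detecting an individual declines with the distance d
# between its activity center and a detector. g0 and sigma are illustrative.
def p_detect(d_km, g0=0.1, sigma_km=3.0):
    return g0 * np.exp(-d_km**2 / (2.0 * sigma_km**2))

# Detection probability for a lion whose activity center lies d km away:
for d in (1.0, 5.0, 10.0):
    print(f"d = {d:4.1f} km -> p = {p_detect(d):.4f}")
```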

  13. Probability density function learning by unsupervised neurons.

    PubMed

    Fiori, S

    2001-10-01

    In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for this structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
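
    The FAN architecture itself is not reproduced here; as a stand-in for unsupervised density learning from samples, the sketch below fits a one-dimensional two-component Gaussian mixture by expectation-maximization, a comparison-class method of the kind the paper benchmarks against.

```python
import numpy as np

# Stand-in for unsupervised PDF learning: fit a 1-D two-component Gaussian
# mixture by EM. This is NOT the paper's FAN neuron, only an illustration
# of learning a density from unlabeled samples.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 0.8, 600), rng.normal(1.5, 1.2, 400)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibilities of each component for each sample
    pdf = w * np.exp(-0.5*((data[:, None]-mu)/sd)**2) / (sd*np.sqrt(2*np.pi))
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, standard deviations
    nk = r.sum(axis=0)
    w = nk / data.size
    mu = (r * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (data[:, None]-mu)**2).sum(axis=0) / nk)

print("weights", w.round(2), "means", mu.round(2), "sds", sd.round(2))
```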

  14. The large-scale gravitational bias from the quasi-linear regime.

    NASA Astrophysics Data System (ADS)

    Bernardeau, F.

    1996-08-01

    It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression for the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat window function and when the distance between the two cells is large compared to the smoothing length. In particular, I show that it is possible to obtain the generating function of the coefficients $C_{p,q}$ defined by $\langle \delta^p(\vec{x}_1)\,\delta^q(\vec{x}_2)\rangle_c = C_{p,q}\,\langle \delta^2(\vec{x})\rangle^{p+q-2}\,\langle \delta(\vec{x}_1)\,\delta(\vec{x}_2)\rangle$, where $\delta(\vec{x})$ is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what had been obtained previously for the one-point density PDF. I discuss the validity of the large-separation approximation in an explicit numerical Monte Carlo integration of the $C_{2,1}$ parameter as a function of $|\vec{x}_1 - \vec{x}_2|$. A straightforward application is the calculation of the large-scale "bias" properties of over-dense (or under-dense) regions. The properties and shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.

  15. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach combining spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula are effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probabilistic information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. In-depth studies of the probability density evolution analysis of the wind-induced structure are conducted to illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
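
    For context, the sketch below shows the classical (unreduced) spectral-representation simulation of a scalar stationary process from a target one-sided spectrum, with fully random phases; the paper's contribution is to constrain these high-dimensional random variables through random functions of two elementary variables, which is not reproduced here. The spectrum S(w) is an illustrative assumption.

```python
import numpy as np

# Classical spectral representation of a scalar stationary process:
#   X(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k),  phi_k ~ U(0, 2*pi).
# The phases here stay fully random (the OSR baseline); the paper replaces
# them with random functions of two elementary random variables.
rng = np.random.default_rng(4)

def S(w):                           # illustrative one-sided target spectrum
    return 1.0 / (1.0 + w**2)

w_max, n_freq = 10.0, 512
w = (np.arange(n_freq) + 0.5) * (w_max / n_freq)   # midpoint frequencies
dw = w_max / n_freq
phi = rng.uniform(0.0, 2.0*np.pi, n_freq)

t = np.linspace(0.0, 200.0, 8000)
X = np.sum(np.sqrt(2.0*S(w)*dw) * np.cos(np.outer(t, w) + phi), axis=1)
print(f"sample variance {X.var():.3f} vs target {np.sum(S(w)*dw):.3f}")
```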

  16. Causal illusions in children when the outcome is frequent

    PubMed Central

    2017-01-01

    Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e., the outcome) occurs with high probability (the outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies offer an outstanding approach for detecting and further exploring the psychological processes and mechanisms underlying this bias. However, the outcome-density bias has been explored mainly in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. The two scenarios differed only in the probability of the outcome, which could be either high or low. Children judged the relation between the two events to be stronger in the high-outcome-probability setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294
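
    The experimental logic can be stated compactly: the objective contingency ΔP = P(outcome | cause) − P(outcome | no cause) is zero in both conditions, while the outcome base rate differs. The simulation below, with illustrative trial counts, confirms that ΔP stays near zero regardless of outcome density; the bias lies in judgments, not in the statistics.

```python
import numpy as np

# Outcome-density setup: cause and outcome are statistically unrelated
# (Delta-P = 0 in expectation), but the outcome base rate differs between
# conditions. Trial counts and probabilities are illustrative.
rng = np.random.default_rng(5)

def null_contingency_trials(p_outcome, n=100):
    cause = rng.random(n) < 0.5               # cause present on half of trials
    outcome = rng.random(n) < p_outcome       # outcome independent of cause
    p_o_given_c = outcome[cause].mean()
    p_o_given_not_c = outcome[~cause].mean()
    return p_o_given_c - p_o_given_not_c      # sample Delta-P

for p in (0.8, 0.2):                          # high vs low outcome density
    dp = np.mean([null_contingency_trials(p) for _ in range(2000)])
    print(f"P(outcome) = {p}: mean Delta-P = {dp:+.3f}")
```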

  17. Twin density of aragonite in molluscan shells characterized using X-ray diffraction and transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi

    2014-07-01

    The {110} twin density in the aragonite constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM) to identify the factors that determine the twin density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructure and species of the shells. The nacreous structure has a very low twin density regardless of shell class. On the other hand, the twin density in the crossed-lamellar (CL) structure varies widely among classes or subclasses, a variation mainly related to the crystallographic direction of the constituent aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions of twin planes. The former is probably characteristic of the CL structures of specific subclasses of Gastropoda. The latter type is probably related to crystal boundaries dominated by (hk0) interfaces in microstructures with preferred orientation of the c-axis, where the twin density is mainly correlated with the crystal size.

  18. Prediction of Conditional Probability of Survival After Surgery for Gastric Cancer: A Study Based on Eastern and Western Large Data Sets.

    PubMed

    Zhong, Qing; Chen, Qi-Yue; Li, Ping; Xie, Jian-Wei; Wang, Jia-Bin; Lin, Jian-Xian; Lu, Jun; Cao, Long-Long; Lin, Mi; Tu, Ru-Hong; Zheng, Chao-Hui; Huang, Chang-Ming

    2018-04-20

    The dynamic prognosis of patients who have undergone curative surgery for gastric cancer has yet to be reported. Our objective was to devise an accurate tool for predicting the conditional probability of survival for these patients. We analyzed 11,551 gastric cancer patients from the Surveillance, Epidemiology, and End Results database. Two-thirds of the patients were selected randomly for the development set and one-third for the validation set. Two nomograms were constructed to predict the conditional probability of overall survival and the conditional probability of disease-specific survival, using conditional survival methods. We then applied these nomograms to the 4,001 patients in the database from Fujian Medical University Union Hospital, Fuzhou, China, one of the most active Chinese institutes. The 5-year conditional probability of overall survival of the patients was 41.6% immediately after resection and increased to 52.8%, 68.2%, and 80.4% at 1, 2, and 3 years after gastrectomy. The 5-year conditional probability of disease-specific survival "increased" from 48.9% at the time of gastrectomy to 59.8%, 74.7%, and 85.5% for patients surviving 1, 2, and 3 years, respectively. Sex; race; age; depth of tumor invasion; lymph node metastasis; and tumor size, site, and grade were associated with overall survival and disease-specific survival (P < .05). Within the Surveillance, Epidemiology, and End Results validation set, the accuracy of the conditional probability of overall survival nomogram was 0.77, 0.81, 0.82, and 0.82 at 1, 3, 5, and 10 years after gastrectomy, respectively. Within the other validation set from the Fujian Medical University Union Hospital (n = 4,001), the accuracy of the conditional probability of overall survival nomogram was 0.76, 0.79, 0.77, and 0.77 at 1, 3, 5, and 10 years, respectively. The accuracy of the conditional probability of disease-specific survival model was also favorable. The calibration curve demonstrated good agreement between the predicted and observed survival rates. Based on the large Eastern and Western data sets, we developed and validated the first conditional nomogram for prediction of conditional probability of survival for patients with gastric cancer to allow consideration of the duration of survivorship. Copyright © 2018 Elsevier Inc. All rights reserved.
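
    Conditional survival of the kind tabulated by such nomograms is the ratio CS(t | s) = S(s + t) / S(s), the probability of surviving t further years given s years already survived. The sketch below uses an illustrative Weibull survival curve with a decreasing hazard, so that CS rises with time survived, as in this cohort; it is not the fitted SEER model.

```python
import numpy as np

# Conditional survival: probability of surviving t more years given s years
# already survived, CS(t | s) = S(s + t) / S(s). S(.) below is an
# illustrative Weibull curve with decreasing hazard (shape < 1), not the
# nomogram fitted to the SEER or Fujian data.
def S(t_years, scale=6.0, shape=0.7):
    return np.exp(-(t_years / scale) ** shape)

t = 5.0                              # 5-year conditional survival
for s in (0.0, 1.0, 2.0, 3.0):       # years already survived
    cs = S(s + t) / S(s)
    print(f"survived {s:.0f} y -> 5-year conditional survival = {cs:.1%}")
```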

  19. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder serving as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational conditions match the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain the resolved density and reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor-species mass fractions. For dimension reduction, cases with 11 and 16 represented species are chosen, and a variant of Rate-Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction accounts for only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably because of the restricted manifold of a purely premixed flame in composition space. Instead, composition mixing is the major contributor to cost reduction, since the computationally expensive mean-drift term is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.
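
    The tabulation idea behind ISAT can be caricatured as a retrieve-or-add cache over composition space: reuse a stored reaction mapping when a query falls within a tolerance of a cached point, and otherwise perform the expensive evaluation and store it. The sketch below is only this caricature, with a placeholder reaction map; real ISAT uses ellipsoids of accuracy, linearized mappings, and a binary search tree, none of which are reproduced here.

```python
import numpy as np

# Toy retrieve-or-add cache illustrating the tabulation idea behind ISAT.
# expensive_reaction_map is a placeholder for integrating the reaction ODEs.
def expensive_reaction_map(c):
    return c * np.exp(-0.1 * c.sum())

table_in, table_out, tol = [], [], 1e-2
hits = misses = 0

def lookup(c):
    global hits, misses
    if table_in:
        d = np.linalg.norm(np.array(table_in) - c, axis=1)
        j = int(d.argmin())
        if d[j] < tol:                      # retrieve: reuse stored result
            hits += 1
            return table_out[j]
    misses += 1                             # add: do the expensive evaluation
    table_in.append(c.copy())
    table_out.append(expensive_reaction_map(c))
    return table_out[-1]

rng = np.random.default_rng(6)
base = rng.random((20, 4))                  # 20 recurring composition states
for _ in range(500):
    lookup(base[rng.integers(20)] + rng.normal(0, 1e-3, 4))
print(f"hits={hits}, misses={misses}, table size={len(table_in)}")
```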

  20. Composition and structure of the Chironomidae (Insecta: Diptera) community associated with bryophytes in a first-order stream in the Atlantic forest, Brazil.

    PubMed

    Rosa, B F J V; Dias-Silva, M V D; Alves, R G

    2013-02-01

    This study describes the structure of the Chironomidae community associated with bryophytes in a first-order stream located in a biological reserve of the Atlantic Forest during two seasons. Samples of bryophytes adhering to rocks along a 100-m stretch of the stream were removed with a metal blade and placed in 200-mL pots. The numerical density (individuals per gram of dry weight), Shannon's diversity index, Pielou's evenness index, the dominance index (DI), and the estimated richness were calculated for each collection period (dry and rainy). Linear regression analysis was employed to test for a correlation between rainfall and the density and richness of individuals. The high numerical density and richness of Chironomidae taxa observed are probably related to the peculiar conditions of the bryophyte habitat. The retention of larvae during periods of higher rainfall contributed to the high density and richness of Chironomidae larvae. The rarefaction analysis showed higher richness in the rainy season, related to the greater retention of food particles. The data from this study show that bryophytes provide stable habitats that Chironomidae larvae can colonize and use as refuge, mainly under conditions of faster water flow and higher precipitation.
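
    The diversity metrics named here are standard: Shannon's index H' = −Σ p_i ln p_i and Pielou's evenness J' = H' / ln S, where S is the taxon richness. A short sketch with illustrative abundance counts (not the study's Chironomidae data) follows.

```python
import numpy as np

# Shannon diversity H' = -sum(p_i * ln p_i) and Pielou evenness
# J' = H' / ln(S), with S the taxon richness. Counts are illustrative.
counts = np.array([120, 85, 40, 22, 9, 5, 3, 1])   # individuals per taxon
p = counts / counts.sum()
H = -(p * np.log(p)).sum()
S = counts.size
J = H / np.log(S)
print(f"richness S = {S}, Shannon H' = {H:.3f}, Pielou J' = {J:.3f}")
```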
