Sample records for assumed probability density

  1. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
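
    The record above describes the technique only at a high level. The following minimal sketch (not the LFSPMC program itself, and with entirely illustrative class statistics and priors) shows the core idea: search for a single linear combination w'x that minimizes the one-dimensional misclassification probability of the transformed densities.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Assumed (illustrative) class statistics: two classes in two dimensions.
    mu = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]        # class mean vectors
    cov = [np.eye(2), np.array([[2.0, 0.3], [0.3, 1.0]])]    # class covariance matrices
    prior = [0.5, 0.5]                                        # a priori class probabilities

    def misclassification(w):
        """One-dimensional Bayes error of the densities transformed by y = w'x."""
        w = w / np.linalg.norm(w)
        m = [w @ mu[i] for i in range(2)]                     # transformed means
        s = [np.sqrt(w @ cov[i] @ w) for i in range(2)]       # transformed std devs
        t = np.linspace(min(m) - 6 * max(s), max(m) + 6 * max(s), 4001)
        p0 = prior[0] * norm.pdf(t, m[0], s[0])
        p1 = prior[1] * norm.pdf(t, m[1], s[1])
        return np.trapz(np.minimum(p0, p1), t)                # overlap = misclassification

    best = minimize(misclassification, x0=np.array([1.0, -1.0]), method="Nelder-Mead")
    print("best direction:", best.x / np.linalg.norm(best.x))
    print("one-dimensional misclassification probability:", best.fun)
    ```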

  2. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighting of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  3. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.

  4. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  5. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  6. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory locations are read at random times whose distribution is assumed to be known. In such a scenario, a mishap is defined as two SEU's corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
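
    As a companion to the abstract above, here is a small Monte Carlo sketch of the mishap definition it gives (two SEUs corrupting the same word before it is next read). This is not the paper's analytic method, and the rates, memory size and mission length below are assumed purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    lam = 1.0e-6   # assumed SEU rate per word per hour (illustrative)
    mu = 1.0e-3    # assumed read rate per word per hour (exponential inter-read times)
    M = 4096       # memory size in words (illustrative)
    T = 10000.0    # mission duration in hours (illustrative)

    def one_mission():
        """Return 1 if any word collects two or more SEUs between consecutive reads."""
        for _ in range(M):
            n_seu = rng.poisson(lam * T)          # upsets hitting this word during the mission
            if n_seu < 2:
                continue                           # a mishap needs at least two upsets
            seus = np.sort(rng.uniform(0.0, T, size=n_seu))
            reads = np.cumsum(rng.exponential(1.0 / mu, size=int(3 * mu * T) + 10))
            reads = np.concatenate(([0.0], reads[reads < T], [T]))
            if np.any(np.histogram(seus, bins=reads)[0] >= 2):
                return 1
        return 0

    trials = 500
    print("estimated mishap probability:",
          sum(one_mission() for _ in range(trials)) / trials)
    ```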

  7. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
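
    To make the idea concrete, the sketch below works a simple discrete case (a die with an uncertain mean-value constraint): the classic MaxEnt point probabilities are treated as a function of the constraint value, and Gaussian uncertainty on that value is pushed through by sampling. This is an illustration of the concept, not the authors' calculation, and all numbers are assumed.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    faces = np.arange(1, 7)

    def maxent_probs(mean_constraint):
        """Classic MaxEnt distribution on a die subject to a mean-value constraint."""
        def mean_err(lam):
            w = np.exp(lam * faces)
            return (faces * w).sum() / w.sum() - mean_constraint
        lam = brentq(mean_err, -10.0, 10.0)       # Lagrange multiplier matching the mean
        w = np.exp(lam * faces)
        return w / w.sum()

    rng = np.random.default_rng(1)
    samples = rng.normal(4.5, 0.1, size=2000)     # uncertain constraint value (assumed Gaussian)
    samples = samples[(samples > 1.05) & (samples < 5.95)]
    p = np.array([maxent_probs(m) for m in samples])

    print("mean of the MaxEnt probabilities:", p.mean(axis=0).round(3))
    print("spread of p(face = 6) induced by constraint uncertainty:", p[:, 5].std().round(4))
    ```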

  8. Modeling turbulent/chemistry interactions using assumed pdf methods

    NASA Technical Reports Server (NTRS)

    Gaffney, R. L., Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.

    1992-01-01

    Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
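
    The Gaussian branch of the approach can be illustrated with a short numerical average of an Arrhenius rate coefficient over an assumed temperature PDF; the rate parameters, mean temperature and fluctuation intensity below are illustrative, not values from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    A, b, Ta = 1.0e8, 0.0, 15000.0          # illustrative Arrhenius parameters (k = A T^b exp(-Ta/T))
    def k(T):
        return A * T**b * np.exp(-Ta / T)

    Tmean, Trms = 1200.0, 200.0             # assumed mean temperature and fluctuation intensity

    T = np.linspace(Tmean - 5.0 * Trms, Tmean + 5.0 * Trms, 2001)
    T = T[T > 300.0]                        # keep the support physically meaningful
    pdf = norm.pdf(T, Tmean, Trms)
    pdf /= np.trapz(pdf, T)                 # renormalize the clipped Gaussian PDF

    k_mean = np.trapz(k(T) * pdf, T)        # assumed-PDF average of the rate coefficient
    print("laminar k(Tmean):      %.4e" % k(Tmean))
    print("assumed-PDF mean of k: %.4e" % k_mean)
    ```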

  9. Modeling of turbulent supersonic H2-air combustion with an improved joint beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1991-01-01

    Attempts at modeling recent experiments of Cheng et al. indicated that discrepancies between theory and experiment can be a result of the form of assumed probability density function (PDF) and/or the turbulence model employed. Improvements in both the form of the assumed PDF and the turbulence model are presented. The results are again used to compare with measurements. Initial comparisons are encouraging.

  10. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
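
    An illustrative Monte Carlo sketch of the procedure described above follows; the distributions and economic parameters are assumed for illustration and are not those postulated in the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials = 10000        # number of simulated drilling programs
    wells = 20              # wells drilled per program (illustrative)
    p_success = 0.15        # success probability per well, wells assumed independent
    cost_per_well = 2.0e6   # drilling cost per well, dollars (illustrative)
    price_per_bbl = 20.0    # price per barrel, dollars (illustrative)

    net_return = np.empty(n_trials)
    unit_cost = np.full(n_trials, np.nan)
    for i in range(n_trials):
        successes = rng.binomial(wells, p_success)
        # reservoir sizes drawn independently (lognormal chosen only for illustration)
        sizes = rng.lognormal(mean=13.0, sigma=1.0, size=successes)     # barrels
        barrels = sizes.sum()
        net_return[i] = price_per_bbl * barrels - wells * cost_per_well
        if barrels > 0:
            unit_cost[i] = wells * cost_per_well / barrels

    hist, edges = np.histogram(net_return, bins=50)   # approximates the density of net return
    print("P(net return > 0) ~", np.mean(net_return > 0))
    print("median unit production cost ($/bbl):", np.nanmedian(unit_cost).round(2))
    ```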

  11. Gravitational wave hotspots: Ranking potential locations of single-source gravitational wave emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Joseph; Polin, Abigail; Lommen, Andrea

    2014-03-20

    The steadily improving sensitivity of pulsar timing arrays (PTAs) suggests that gravitational waves (GWs) from supermassive black hole binary (SMBHB) systems in the nearby universe will be detectable sometime during the next decade. Currently, PTAs assume an equal probability of detection from every sky position, but as evidence grows for a non-isotropic distribution of sources, is there a most likely sky position for a detectable single source of GWs? In this paper, a collection of Galactic catalogs is used to calculate various metrics related to the detectability of a single GW source resolvable above a GW background, assuming that every galaxy has the same probability of containing an SMBHB. Our analyses of these data reveal small probabilities that one of these sources is currently in the PTA band, but as sensitivity is improved regions of consistent probability density are found in predictable locations, specifically around local galaxy clusters.

  12. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.

  13. A Non-Parametric Probability Density Estimator and Some Applications.

    DTIC Science & Technology

    1984-05-01

    distributions, which are assumed to be representative of platykurtic, mesokurtic, and leptokurtic distributions in general. The dissertation is... platykurtic distributions. Consider, for example, the uniform distribution shown in Figure 4 (Sensitivity to Support Estimation). The...results of the density function comparisons indicate that the new estimator is clearly superior for platykurtic distributions, equal to the best

  14. Circuit analysis method for thin-film solar cell modules

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The design of a thin-film solar cell module is dependent on the probability of occurrence of pinhole shunt defects. Using known or assumed defect density data, dichotomous population statistics can be used to calculate the number of defects expected in a module. Probability theory is then used to assign the defective cells to individual strings in a selected series-parallel circuit design. Iterative numerical calculation is used to calculate I-V curves using cell test values or assumed defective cell values as inputs. Good and shunted cell I-V curves are added to determine the module output power and I-V curve. Different levels of shunt resistance can be selected to model different defect levels.
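
    A minimal numeric sketch of the statistical first step (defect counts from an assumed defect density, and their allocation to series strings) follows; the module geometry and defect probability are assumed for illustration, and the I-V summation step is not reproduced.

    ```python
    from math import comb

    cells_per_module = 400               # illustrative module size
    defect_prob_per_cell = 0.01          # assumed pinhole shunt defect density per cell
    strings = 8                          # series strings in the series-parallel design
    cells_per_string = cells_per_module // strings

    def p_defects(k):
        """Binomial probability of exactly k defective cells in the module."""
        return comb(cells_per_module, k) * defect_prob_per_cell**k \
               * (1.0 - defect_prob_per_cell)**(cells_per_module - k)

    p_string_clean = (1.0 - defect_prob_per_cell)**cells_per_string

    print("expected defective cells per module:", cells_per_module * defect_prob_per_cell)
    print("P(k defective cells), k = 0..3:", [round(p_defects(k), 4) for k in range(4)])
    print("probability a given string contains no shunted cell:", round(p_string_clean, 4))
    ```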

  15. Chemically reacting supersonic flow calculation using an assumed PDF model

    NASA Technical Reports Server (NTRS)

    Farshchi, M.

    1990-01-01

    This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.

  16. Application of multivariate Gaussian detection theory to known non-Gaussian probability density functions

    NASA Astrophysics Data System (ADS)

    Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.

    1995-06-01

    A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
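
    The Box-Cox step mentioned above can be sketched in a few lines: per-band power-law transforms bring synthetic non-Gaussian data toward Gaussianity, after which ordinary multivariate Gaussian statistics are estimated. The data and the per-band (rather than joint) treatment here are illustrative assumptions, not the paper's formulation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Synthetic, positively skewed two-band data standing in for multispectral pixels.
    x = np.column_stack([rng.gamma(2.0, 1.0, 5000), rng.lognormal(0.0, 0.5, 5000)])

    y = np.empty_like(x)
    lambdas = []
    for band in range(x.shape[1]):
        y[:, band], lam = stats.boxcox(x[:, band])   # Box-Cox power-law transform per band
        lambdas.append(lam)

    mean_hat = y.mean(axis=0)                        # Gaussian statistics in transformed space
    cov_hat = np.cov(y, rowvar=False)
    print("estimated Box-Cox lambdas:", np.round(lambdas, 3))
    print("transformed-space mean:", np.round(mean_hat, 3))
    print("transformed-space covariance:\n", np.round(cov_hat, 3))
    ```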

  17. Bragg-cell receiver study

    NASA Technical Reports Server (NTRS)

    Wilson, Lonnie A.

    1987-01-01

    Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Bragg-cell receiver characteristics are fully characterized for simple RF emitter signals. This receiver is early in its development cycle when compared to the IFM receiver. Functional mathematical models are derived and presented in this report for the Bragg-cell receiver. Theoretical analysis and digital computer signal processing results are presented for the Bragg-cell receiver. Probability density function analyses are performed for the output frequency. Probability density function distributions are observed to depart from assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution and fine-grain EW Bragg-cell receiver systems.

  18. Single-molecule stochastic times in a reversible bimolecular reaction

    NASA Astrophysics Data System (ADS)

    Keller, Peter; Valleriani, Angelo

    2012-08-01

    In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudobiomolecular reactions in biology, such as for instance molecular motors. We derive the exact probability density for the stochastic waiting time that a molecule of species A needs until the reaction with a molecule of species B takes place. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic and in the approximate treatment.

  19. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  20. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.

  1. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2010-01-01

    When facing a conjunction between space objects, decision makers must choose whether to maneuver for collision avoidance or not. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
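
    For readers unfamiliar with the decision procedure, here is a generic Wald sequential probability ratio test sketch with Gaussian observations; the hypotheses, statistics and thresholds are illustrative stand-ins and do not reproduce the authors' frequentist or Bayesian formulations.

    ```python
    import numpy as np
    from scipy.stats import norm

    alpha, beta = 0.01, 0.001            # target false-alarm and missed-detection rates
    A = np.log((1.0 - beta) / alpha)     # upper threshold: accept H1 (maneuver)
    B = np.log(beta / (1.0 - alpha))     # lower threshold: accept H0 (no maneuver)

    mu0, mu1, sigma = 0.0, 1.0, 1.0      # assumed test-statistic means under H0/H1

    def sprt(observations):
        """Wald sequential test: returns the decision and the sample at which it was reached."""
        llr = 0.0
        for k, z in enumerate(observations, start=1):
            llr += norm.logpdf(z, mu1, sigma) - norm.logpdf(z, mu0, sigma)
            if llr >= A:
                return "maneuver", k
            if llr <= B:
                return "no maneuver", k
        return "undecided", len(observations)

    rng = np.random.default_rng(4)
    print(sprt(rng.normal(mu1, sigma, 50)))   # data generated under H1
    print(sprt(rng.normal(mu0, sigma, 50)))   # data generated under H0
    ```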

  2. Interstellar abundances and depletions inferred from observations of neutral atoms

    NASA Technical Reports Server (NTRS)

    Snow, T. P.

    1984-01-01

    Data on neutral atomic species are analyzed for the purpose of inferring relative elemental abundances and depletions in diffuse cloud cores, where it is assumed that densities are enhanced in comparison with mean densities over integrated lines of sight. Column densities of neutral atoms are compared to yield relative column densities of singly ionized species, which are assumed dominant in cloud cores. This paper incorporates a survey of literature data on neutral atomic abundances with the result that no systematic enhancement in the depletions of calcium or iron in cloud cores is found, except for zeta Ophiuchi. This may imply that depletions are not influenced by density, but other data argue against this interpretation. It is concluded either that in general all elements are depleted together in dense regions so that their relative abundances remain constant, or that typical diffuse clouds do not have significant cores, but instead are reasonably homogeneous. The data show a probable correlation between cloud-core depletion and hydrogen-molecular fraction, supporting the assumption that overall depletions are a function of density.

  3. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system reaction to the presence of tumour cells but only postulate its general properties. We analyse basic mathematical properties of the considered model such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the forms of the considered probability densities, that is, Erlang, triangular and uniform probability densities, separated or not from zero. Particular instability results are obtained for a general type of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mouse B-cell lymphoma, showing mean square errors at a comparable level. For estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulation, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.

  4. A cross-diffusion system derived from a Fokker-Planck equation with partial averaging

    NASA Astrophysics Data System (ADS)

    Jüngel, Ansgar; Zamponi, Nicola

    2017-02-01

    A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.

  5. Generalized fish life-cycle poplulation model and computer program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeAngelis, D. L.; Van Winkle, W.; Christensen, S. W.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included. These are cannibalism of each life stage by older age classes and intra-life-stage competition.
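
    The following sketch shows the general shape of such life-cycle difference equations: age-dependent fecundity and survival, an added density-independent (e.g., power plant) mortality acting on age class 0, and a density-dependent age-0 survival. The functional form and all parameter values are assumed for illustration and are not those of the report.

    ```python
    import numpy as np

    # eggs produced per fish entering each age class (illustrative)
    fecundity = np.array([0.0, 0.0, 50.0, 120.0, 200.0, 250.0])
    # density-independent annual survival for age classes 1..5 (illustrative)
    survival = np.array([0.5, 0.6, 0.7, 0.7, 0.6])
    extra_age0_mortality = 0.10            # e.g. entrainment/impingement fraction (illustrative)

    def age0_survival(eggs, base=0.02, alpha=1.0e-7):
        """Density-dependent survival of age class 0 (Beverton-Holt-like, illustrative form)."""
        return base / (1.0 + alpha * eggs)

    N = np.array([1.0e5, 5.0e4, 2.0e4, 1.0e4, 5.0e3, 2.0e3])   # numbers in age classes 0..5
    for year in range(50):
        eggs = (fecundity * N).sum()
        recruits = eggs * age0_survival(eggs) * (1.0 - extra_age0_mortality)
        N = np.concatenate(([recruits], survival * N[:-1]))

    print("age structure after 50 years:", N.round(1))
    ```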

  6. On the Hardness of Subset Sum Problem from Different Intervals

    NASA Astrophysics Data System (ADS)

    Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke

    The subset sum problem, which is often called the knapsack problem, is known as an NP-hard problem, and there are several cryptosystems based on the problem. Assuming an oracle for the shortest vector problem on lattices, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently, when the “density” of the given problem is smaller than some threshold. When we define the density in the context of knapsack-type cryptosystems, weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that weights are chosen from different intervals, and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when we reduce the data size of public keys.

  7. Derived distribution of floods based on the concept of partial area coverage with a climatic appeal

    NASA Astrophysics Data System (ADS)

    Iacobellis, Vito; Fiorentino, Mauro

    2000-02-01

    A new rationale for deriving the probability distribution of floods and help in understanding the physical processes underlying the distribution itself is presented. On the basis of this a model that presents a number of new assumptions is developed. The basic ideas are as follows: (1) The peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area ua and the peak contributing area a; (2) the distribution of ua conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τa of the contributing part of the basin; and (3) τa is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of that of a times the density function of ua given a. It is suggested that ua can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and ua are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km2. The results showed strong physical consistency as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge and suggested a new key to understanding the climatic control of the probability distribution of floods.
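
    A compact numerical rendering of the stated construction is given below: the density of Q = ua * a is obtained by integrating the conditional Weibull density of ua (given a) against a gamma density for the contributing area a. The dependence of the response time τa on a is not modelled here, and all parameter values are assumed for illustration.

    ```python
    import numpy as np
    from scipy.stats import gamma, weibull_min

    A_basin = 800.0                           # total basin area, km^2 (illustrative)
    f_a = gamma(a=2.0, scale=A_basin / 6.0)   # density of the peak contributing area a
    c_w, scale_w = 1.5, 5.0                   # Weibull shape/scale for u_a given a (illustrative)

    def f_Q(q, n=400):
        """Density of Q = u_a * a: integrate f_{u_a|a}(q/a | a) / a against f_a(a)."""
        a = np.linspace(1.0e-3, A_basin, n)
        integrand = weibull_min.pdf(q / a, c_w, scale=scale_w) / a * f_a.pdf(a)
        return np.trapz(integrand, a)

    for q in (50.0, 200.0, 800.0):
        print(f"f_Q({q:.0f}) ~ {f_Q(q):.3e}")
    ```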

  8. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software

  9. Quantifying Uncertainties in the Thermo-Mechanical Properties of Particulate Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Mital, Subodh K.; Murthy, Pappu L. N.

    1999-01-01

    The present paper reports results from a computational simulation of probabilistic particulate reinforced composite behavior. The approach consists of the use of simplified micromechanics of particulate reinforced composites together with a Fast Probability Integration (FPI) technique. Sample results are presented for an Al/SiC(sub p) (silicon carbide particles in an aluminum matrix) composite. The probability density functions for composite moduli, thermal expansion coefficient and thermal conductivities, along with their sensitivity factors, are computed. The effect of different assumed distributions and the effect of reducing scatter in constituent properties on the thermal expansion coefficient are also evaluated. The variations in the constituent properties that directly affect these composite properties are accounted for by assumed probabilistic distributions. The results show that the present technique provides valuable information about the scatter in composite properties and sensitivity factors, which are useful to test or design engineers.

  10. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  11. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.

  12. A Bayesian approach to modeling 2D gravity data using polygon states

    NASA Astrophysics Data System (ADS)

    Titus, W. J.; Titus, S.; Davis, J. R.

    2015-12-01

    We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but that it has no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match properties of object.

  13. Combined natural gamma ray spectral/litho-density measurements applied to complex lithologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quirein, J.A.; Gardner, J.S.; Watson, J.T.

    1982-09-01

    Well log data has long been used to provide lithological descriptions of complex formations. Historically, most of the approaches used have been restrictive because they assumed fixed, known, and distinct lithologies for specified zones. The approach described in this paper attempts to alleviate this restriction by estimating the "probability of a model" for the models suggested as most likely by the reservoir geology. Lithological variables are simultaneously estimated from response equations for each model and combined in accordance with the probability of each respective model. The initial application of this approach has been the estimation of calcite, quartz, and dolomite in the presence of clays, feldspars, anhydrite, or salt. Estimations were made by using natural gamma ray spectra, photoelectric effect, bulk density, and neutron porosity information. For each model, response equations and parameter selections are obtained from the thorium vs potassium crossplot and the apparent matrix density vs apparent volumetric photoelectric cross section crossplot. The thorium and potassium response equations are used to estimate the volumes of clay and feldspar. The apparent matrix density and volumetric cross section response equations can then be corrected for the presence of clay and feldspar. A test ensures that the clay correction lies within the limits for the assumed lithology model. Results are presented for varying lithologies. For one test well, 6,000 feet were processed in a single pass, without zoning and without adjusting more than one parameter pick. The program recognized sand, limestone, dolomite, clay, feldspar, anhydrite, and salt without analyst intervention.

  14. Probability density function evolution of power systems subject to stochastic variation of renewable energy

    NASA Astrophysics Data System (ADS)

    Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.

    2018-05-01

    As renewable energies are increasingly integrated into power systems, there is increasing interest in stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by penetration of renewables and consequently analyse its impacts on stochastic stability of power systems. In this paper, the Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of the power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subjected to such random excitation, the Joint Probability Density Function (JPDF) solution to the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, the numerical method is adopted. Special measure is taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single machine infinite bus power system. The numerical analysis has the same result as the one given by the Monte Carlo simulation. Potential studies on stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.

  15. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful'' sites have a higher persistence probability than "unsuccessful'' ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  16. The effects of vent location, event scale and time forecasts on pyroclastic density current hazard maps at Campi Flegrei caldera (Italy)

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano

    2017-09-01

    This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at Campi Flegrei caldera. Such method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps representative of PDC originating inside limited zones of the caldera, or of PDC with a limited range of scales are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, also including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years on almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values reduce by a factor of about 3 if the entire eruptive record is considered over the last 15 kyr, i.e. including both eruptive epochs and quiescent periods.

  17. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.

  18. A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-06-13

    The Lanchester combat model is a simple way to assess the effects of quantity and quality...case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons...since the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at

  19. On Algorithms for Generating Computationally Simple Piecewise Linear Classifiers

    DTIC Science & Technology

    1989-05-01

    suffers. - Waveform classification, e.g. speech recognition, seismic analysis (i.e. discrimination between earthquakes and nuclear explosions), target...assuming Gaussian distributions (B-G); d) Bayes classifier with probability densities estimated with the k-NN method (B-kNN); e) the nearest neighbour...range of classifiers are chosen, including a fast, easily computable and often used classifier (B-G), reliable and complex classifiers (B-kNN and NNR

  20. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring in extra constraints in the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables, but put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. In the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
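
    A one-dimensional sketch of the proposed representation follows: the Gaussian is truncated to the allowed interval and the truncated mass is placed in delta components at the bounds, so that, for example, P(sea-ice concentration = 0) can be strictly positive. The numbers used are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    mu, sigma = 0.05, 0.10     # illustrative analysis mean and spread for sea-ice concentration
    lo, hi = 0.0, 1.0          # physical bounds of the variable

    w_lo = norm.cdf(lo, mu, sigma)           # truncated mass placed in a delta at the lower bound
    w_hi = 1.0 - norm.cdf(hi, mu, sigma)     # truncated mass placed in a delta at the upper bound

    def continuous_pdf(x):
        """Continuous part of the pdf on (lo, hi); the deltas carry w_lo and w_hi."""
        return np.where((x > lo) & (x < hi), norm.pdf(x, mu, sigma), 0.0)

    print("P(concentration = 0) =", round(w_lo, 4))
    print("P(concentration = 1) =", round(w_hi, 4))
    print("mass of the continuous part =", round(1.0 - w_lo - w_hi, 4))
    ```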

  1. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the Earth follow a power law (fractal distribution). Thus, the selection of an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance and Hurst coefficient of the model space, to draw the initial model. Further, this initial model is used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto used gradient-based linear inversion method.

  2. Public opinion by a poll process: model study and Bayesian view

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Keun; Kim, Yong Woon

    2018-05-01

    We study the formation of public opinion in a poll process where the current score is open to the public. The voters are assumed to vote probabilistically for or against their own preference considering the group opinion collected up to then in the score. The poll-score probability is found to follow the beta distribution in the limit of large polls. We demonstrate that various poll results, even those contradictory to the population preference, are possible with non-zero probability density and that such deviations are readily triggered by initial bias. It is mentioned that our poll model can be understood from the Bayesian viewpoint.
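
    Since the abstract does not spell out the voting rule, the simulation below assumes a simple Polya-urn-type rule (each voter votes "for" with probability equal to the current "for" fraction) purely to illustrate the two qualitative claims: the final score spreads out over a beta-like distribution, and an initial bias readily shifts it.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def run_poll(n_voters, init_for=1, init_against=1):
        """Polya-urn-type poll: each voter votes 'for' with probability equal to the current score."""
        f, a = init_for, init_against
        for _ in range(n_voters):
            if rng.random() < f / (f + a):
                f += 1
            else:
                a += 1
        return f / (f + a)

    unbiased = [run_poll(1000, 1, 1) for _ in range(1000)]
    biased = [run_poll(1000, 5, 1) for _ in range(1000)]
    print("unbiased start: mean =", round(np.mean(unbiased), 3), " spread =", round(np.std(unbiased), 3))
    print("biased start:   mean =", round(np.mean(biased), 3), " spread =", round(np.std(biased), 3))
    ```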

  3. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is given to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
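
    As a small companion to the abstract, the sketch below fits a beta density to synthetic cloud-cover fractions standing in for one of the empirical distributions; the U-shaped sample used here is an assumption made only for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # Synthetic cloud-cover fractions with the U shape often seen in practice (assumed data).
    cover = rng.beta(0.6, 0.4, size=1000)

    p, q, loc, scale = stats.beta.fit(cover, floc=0.0, fscale=1.0)   # fit the beta density on [0, 1]
    fitted = stats.beta(p, q)

    print("fitted shape parameters p, q:", round(p, 3), round(q, 3))
    print("sample median vs fitted median:",
          round(float(np.median(cover)), 3), round(float(fitted.median()), 3))
    ```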

  4. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).

  5. Numerical solutions of the complete Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1993-01-01

    The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach where the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdf's a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method which is computationally expensive and slow to converge. Preliminary results show both pdf approaches to yield similar solutions for the mean flow variables.
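
    Why the assumed-pdf average of a nonlinear source term differs from the 'laminar' evaluation at the mean can be illustrated with a one-variable sketch; the Arrhenius parameters and the Gaussian temperature pdf below are assumptions chosen for illustration only:

      import numpy as np

      def rate(T, A=1.0e8, Ta=15000.0):
          """Arrhenius-type reaction rate, strongly nonlinear in temperature
          (pre-factor and activation temperature are hypothetical)."""
          return A * np.exp(-Ta / T)

      T_mean, T_rms = 1200.0, 150.0      # mean temperature and assumed fluctuation level

      # 'Laminar' treatment: evaluate the source term at the mean temperature.
      laminar = rate(T_mean)

      # Assumed-pdf treatment: average the rate over a Gaussian temperature pdf.
      T = np.linspace(T_mean - 5 * T_rms, T_mean + 5 * T_rms, 2001)
      dT = T[1] - T[0]
      pdf = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2) / (T_rms * np.sqrt(2.0 * np.pi))
      averaged = np.sum(rate(T) * pdf) * dT

      print(f"laminar rate : {laminar:.3e}")
      print(f"pdf-averaged : {averaged:.3e}  (ratio {averaged / laminar:.2f})")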

  6. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most of the literature on designing urban non-point-source management systems assumes that precipitation event depths follow the 1-parameter exponential probability density function, which reduces the mathematical complexity of the derivation process. However, the way rainfall is expressed is the most important factor in analyzing stormwater; thus, a better mathematical expression for the probability distribution of rainfall depths is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified to use the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method, to account for the nonlinearity of the rainfall-runoff relation and, at the same time, obtain a curve that is more verifiable and representative for design when applied to urban drainage areas with complicated land-use characteristics, such as those in Korea. Deriving the stormwater-capture curve from rainfall data for Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
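
    A minimal sketch of deriving a stormwater-capture curve by combining an event-depth distribution with the NRCS curve number runoff relation; for brevity it uses the simple 1-parameter exponential depth model rather than the 3-parameter mixed exponential proposed in the paper, and all parameter values are hypothetical:

      import numpy as np

      rng = np.random.default_rng(2)

      def nrcs_runoff(P_mm, CN=85.0):
          """NRCS curve-number runoff depth (mm) for event rainfall depth P (mm), SI form."""
          S = 25400.0 / CN - 254.0               # potential maximum retention
          Ia = 0.2 * S                           # initial abstraction
          return np.where(P_mm > Ia, (P_mm - Ia) ** 2 / (P_mm - Ia + S), 0.0)

      # Illustrative 1-parameter exponential event-depth model (mean 12 mm); the paper
      # replaces this with a 3-parameter mixed exponential fitted to Busan rainfall.
      P = rng.exponential(scale=12.0, size=200000)
      Q = nrcs_runoff(P)

      for capacity in (5.0, 10.0, 20.0):         # per-event storage capacity, mm
          captured = np.minimum(Q, capacity).sum() / Q.sum()
          print(f"capacity {capacity:4.1f} mm -> long-term runoff capture ratio {captured:.2f}")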

  7. Fusion of Imaging and Inertial Sensors for Navigation

    DTIC Science & Technology

    2006-09-01

    combat operations. The Global Positioning System (GPS) was fielded in the 1980s and first used for precision navigation and targeting in combat... equations [37]. Consider the homogeneous nonlinear differential equation ẋ(t) = f[x(t), u(t), t]; x(t0) = x0 (2.4). For a given input function, u0(t... differential equation is a time-varying probability density function. The Kalman filter derivation assumes Gaussian distributions for all random

  8. Technical Report 1205: A Simple Probabilistic Combat Model

    DTIC Science & Technology

    2016-07-08

    The Lanchester combat model is a simple way to assess the effects of quantity and quality... model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned... the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a

  9. Probabilistic structural analysis of aerospace components using NESSUS

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Nagpal, Vinod K.; Chamis, Christos C.

    1988-01-01

    Probabilistic structural analysis of a Space Shuttle main engine turbopump blade is conducted using the computer code NESSUS (numerical evaluation of stochastic structures under stress). The goal of the analysis is to derive probabilistic characteristics of blade response given probabilistic descriptions of uncertainties in blade geometry, material properties, and temperature and pressure distributions. Probability densities are derived for critical blade responses. Risk assessment and failure life analysis are conducted assuming different failure models.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prentice, John K.; Gardner, David Randall

    A methodology was developed for computing the probability that the sensor dart for the 'Near Real-Time Site Characterization for Assured HDBT Defeat' Grand-Challenge LDRD project will survive deployment over a forested region. The probability can be decomposed into three approximately independent probabilities that account for forest coverage, branch density, and the physics of an impact between the dart and a tree branch. The probability that a dart survives an impact with a tree branch was determined from the deflection induced by the impact. If a dart was deflected so that it impacted the ground at an angle of attack exceeding a user-specified threshold value, the dart was assumed not to survive the impact with the branch; otherwise it was assumed to have survived. A computer code was developed for calculating the dart angle of attack at impact with the ground, and a Monte Carlo scheme was used to calculate the probability distribution of a sensor dart surviving an impact with a branch as a function of branch radius, length, and height from the ground. Both an early prototype design and the current dart design were used in these studies. As a general rule of thumb, we observed that for reasonably generic trees and for a threshold angle of attack of 5 degrees (which is conservative for dart survival), the probability of reaching the ground with an angle of attack less than the threshold is on the order of 30% for the prototype dart design and 60% for the current dart design, though these numbers should be treated with some caution.
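
    A schematic Monte Carlo version of the probability decomposition described above (forest coverage, branch encounters, and deflection past the threshold angle of attack); every numerical value below is a hypothetical placeholder, not a value from the study:

      import numpy as np

      rng = np.random.default_rng(3)

      # All parameters are hypothetical placeholders, not values from the project.
      P_FOREST = 0.6            # fraction of the target region under canopy
      MEAN_BRANCH_HITS = 0.8    # mean number of branch impacts per descent (Poisson)
      P_BAD_DEFLECTION = 0.4    # P(ground angle of attack > threshold | one branch hit)

      def dart_survives():
          if rng.random() > P_FOREST:
              return True                      # open ground: no branch hazard
          n_hits = rng.poisson(MEAN_BRANCH_HITS)
          # The dart is assumed lost if any impact deflects it past the threshold angle.
          return not any(rng.random() < P_BAD_DEFLECTION for _ in range(n_hits))

      trials = 50000
      p_survive = sum(dart_survives() for _ in range(trials)) / trials
      print(f"estimated survival probability: {p_survive:.3f}")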

  11. Spectral characteristics of convolutionally coded digital signals

    NASA Technical Reports Server (NTRS)

    Divsalar, D.

    1979-01-01

    The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.

  12. A hidden Markov model approach to neuron firing patterns.

    PubMed

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-11-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
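
    The likelihood at the core of such a fit can be computed with the standard forward algorithm; the sketch below assumes gamma-distributed interspike intervals within each hidden state and uses hypothetical two-state parameters:

      import numpy as np
      from scipy import stats

      def hmm_loglik(intervals, trans, start, shapes, scales):
          """Scaled forward algorithm for a hidden Markov model whose states emit
          gamma-distributed interspike intervals (an assumed emission family)."""
          n_states = len(start)
          emis = np.array([stats.gamma.pdf(intervals, a=shapes[k], scale=scales[k])
                           for k in range(n_states)]).T        # shape (T, n_states)
          alpha = start * emis[0]
          loglik = 0.0
          for t in range(1, len(intervals)):
              c = alpha.sum()                  # rescale to avoid numerical underflow
              loglik += np.log(c)
              alpha = (alpha / c) @ trans * emis[t]
          return loglik + np.log(alpha.sum())

      # Two-state toy example with hypothetical parameters (short vs. long intervals).
      intervals = np.array([0.12, 0.09, 0.45, 0.50, 0.11, 0.10, 0.48, 0.13])
      trans = np.array([[0.9, 0.1],
                        [0.2, 0.8]])
      print(hmm_loglik(intervals, trans, start=np.array([0.5, 0.5]),
                       shapes=[4.0, 4.0], scales=[0.03, 0.12]))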

  13. Does probability of occurrence relate to population dynamics?

    USGS Publications Warehouse

    Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.

    2014-01-01

    Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (Pocc). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with Pocc, while N, and for most regions K, was generally positively correlated with Pocc. Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.

  14. DENSITY: software for analysing capture-recapture data from passive detector arrays

    USGS Publications Warehouse

    Efford, M.G.; Dawson, D.K.; Robbins, C.S.

    2004-01-01

    A general computer-intensive method is described for fitting spatial detection functions to capture-recapture data from arrays of passive detectors such as live traps and mist nets. The method is used to estimate the population density of 10 species of breeding birds sampled by mist-netting in deciduous forest at Patuxent Research Refuge, Laurel, Maryland, U.S.A., from 1961 to 1972. Total density (9.9 ± 0.6 ha-1, mean ± SE) appeared to decline over time (slope -0.41 ± 0.15 ha-1 y-1). The mean precision of annual estimates for all 10 species pooled was acceptable (CV(D) = 14%). Spatial analysis of closed-population capture-recapture data highlighted deficiencies in non-spatial methodologies. For example, effective trapping area cannot be assumed constant when detection probability is variable. Simulation may be used to evaluate alternative designs for mist net arrays where density estimation is a study goal.

  15. Axial and Radial Compression of Ion Beams.

    DTIC Science & Technology

    1980-03-01

    is found to be Jbn/Jb - 1 - 5 x 10 for proposed fusion systems. Since this low level of noise is probably not achievable, some hollowing out of the...that the ion beam was assumed to have about 1 cm2 cross section that is a little smaller than the confining low density plasma. The plasma was chosen...after many ripple wavelengths due to the weak dependence of the betatron frequency on the small spread in ion injection angles. Other mechanisms, such

  16. Explaining Zipf's law via a mental lexicon

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Deng, Weibing; Wang, Q. A.

    2013-12-01

    Zipf's law is the major regularity of statistical linguistics that has served as a prototype for rank-frequency relations and scaling laws in natural sciences. Here we show that Zipf's law—together with its applicability for a single text and its generalizations to high and low frequencies including hapax legomena—can be derived by assuming that the words are drawn into the text with random probabilities. Their a priori density relates, via Bayesian statistics, to the mental lexicon of the author who produced the text.

  17. Modeling of turbulent supersonic H2-air combustion with a multivariate beta PDF

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Hassan, H. A.

    1993-01-01

    Recent calculations of turbulent supersonic reacting shear flows using an assumed multivariate beta PDF (probability density function) resulted in reduced production rates and a delay in the onset of combustion. This result is not consistent with available measurements. The present research explores two possible reasons for this behavior: use of PDF's that do not yield Favre averaged quantities, and the gradient diffusion assumption. A new multivariate beta PDF involving species densities is introduced which makes it possible to compute Favre averaged mass fractions. However, using this PDF did not improve comparisons with experiment. A countergradient diffusion model is then introduced. Preliminary calculations suggest this to be the cause of the discrepancy.

  18. A Bayesian Approach to Magnetic Moment Determination Using μSR

    NASA Astrophysics Data System (ADS)

    Blundell, S. J.; Steele, A. J.; Lancaster, T.; Wright, J. D.; Pratt, F. L.

    A significant challenge in zero-field μSR experiments arises from the uncertainty in the muon site. It is possible to calculate the dipole field (and hence precession frequency v) at any particular site given the magnetic moment μ and magnetic structure. One can also evaluate f(v), the probability distribution function of v, assuming that the muon site can be anywhere within the unit cell with equal probability, excluding physically forbidden sites. Since v is obtained from experiment, what we would like to know is g(μ|v), the probability density function of μ given the observed v. This can be obtained from our calculated f(v|μ) using Bayes' theorem. We describe an approach to this problem which we have used to extract information about real systems, including a low-moment osmate compound, a family of molecular magnets, and an iron-arsenide compound.
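
    A minimal sketch of the Bayes inversion g(μ|v) ∝ f(v|μ) p(μ), with the dipole-field calculation replaced by an assumed distribution of site-dependent coupling constants; all numbers below are hypothetical:

      import numpy as np

      rng = np.random.default_rng(4)

      # Assumed distribution of the geometric coupling k (frequency per unit moment)
      # over allowed muon sites; in the real calculation k comes from dipole-field
      # sums at random positions in the unit cell, excluding forbidden sites.
      k_sites = rng.uniform(0.5, 3.0, size=20000)

      nu_obs, sigma_nu = 2.4, 0.1                  # observed frequency and its width
      mu_grid = np.linspace(0.1, 4.0, 400)         # candidate moments
      dmu = mu_grid[1] - mu_grid[0]

      def likelihood(mu):
          # f(nu_obs | mu): marginalize the site by averaging a Gaussian measurement
          # kernel over the simulated frequencies mu * k.
          return np.mean(np.exp(-0.5 * ((nu_obs - mu * k_sites) / sigma_nu) ** 2))

      prior = np.ones_like(mu_grid)                               # flat prior on mu
      post = np.array([likelihood(m) for m in mu_grid]) * prior   # Bayes' theorem
      post /= post.sum() * dmu                                    # normalized g(mu | nu_obs)

      print("posterior mean moment:", np.sum(mu_grid * post) * dmu)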

  19. A hidden Markov model approach to neuron firing patterns.

    PubMed Central

    Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G

    1996-01-01

    Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581

  20. Progress in the development of PDF turbulence models for combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    A combined Monte Carlo-computational fluid dynamic (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equation) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.

  1. On the probability distribution function of the mass surface density of molecular clouds. I

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

    The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas, which is modeled using a log-normal function, while the second component is attributed to condensed structures modeled using a simple power law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a radial density profile truncated at cloud radius rcl. The assumed profile is of the form ρ(r) = ρc/(1 + (r/r0)^2)^(n/2) for arbitrary power n, where ρc and r0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at maximal mass surface density. The PDF of spherical condensations and the asymptotic PDF of cylinders in the limit of infinite overdensity ρc/ρ(rcl) flatten for steeper density profiles and have a power-law asymptote at low and high mass surface densities and a well-defined maximum. The power index of the asymptote Σ^(-γ) of the logarithmic PDF (ΣP(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n - 1) - 1 (spheres) or by γ = n/(n - 1) - 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org
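
    A numerical sketch of the surface-density PDF for the truncated spherical profile quoted above, obtained by integrating the profile along random lines of sight; the cloud parameters are hypothetical and the histogram only approximates the analytic PDF derived in the paper:

      import numpy as np

      # Truncated spherical profile rho(r) = rho_c / (1 + (r/r0)^2)^(n/2); all cloud
      # parameters are hypothetical.
      rho_c, r0, r_cl, n = 1.0, 1.0, 10.0, 3.0

      def surface_density(b, nz=1000):
          """Mass surface density at impact parameter b (line-of-sight integral)."""
          z = np.linspace(0.0, np.sqrt(r_cl ** 2 - b ** 2), nz)
          rho = rho_c / (1.0 + (b ** 2 + z ** 2) / r0 ** 2) ** (n / 2.0)
          dz = z[1] - z[0]
          return 2.0 * (rho.sum() - 0.5 * (rho[0] + rho[-1])) * dz   # trapezoid rule

      rng = np.random.default_rng(7)
      b = r_cl * np.sqrt(rng.random(10000))       # uniform over the projected cloud area
      Sigma = np.array([surface_density(bi) for bi in b])

      # Estimate the logarithmic PDF Sigma * P(Sigma) from a histogram in log(Sigma).
      hist, edges = np.histogram(np.log(Sigma), bins=40, density=True)
      centers = np.exp(0.5 * (edges[1:] + edges[:-1]))
      print(centers[::8].round(3))
      print(hist[::8].round(3))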

  2. Timescales of isotropic and anisotropic cluster collapse

    NASA Astrophysics Data System (ADS)

    Bartelmann, M.; Ehlers, J.; Schneider, P.

    1993-12-01

    From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. in favour of a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.

  3. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probabilities (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
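
    A sketch of the removal estimator under the simplest assumption of a constant per-minute detection (singing) rate, fitted to hypothetical counts of first detections in the 0-2, 2-5 and 5-10 min intervals:

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Hypothetical counts of first detections in the 0-2, 2-5 and 5-10 min intervals
      # of a 10 min point count.
      counts = np.array([60, 25, 15])
      edges = np.array([0.0, 2.0, 5.0, 10.0])

      def neg_loglik(r):
          """Conditional multinomial likelihood of the removal model with a constant
          per-minute detection (singing) rate r."""
          cell = np.exp(-r * edges[:-1]) - np.exp(-r * edges[1:])   # P(first detection in interval)
          p_det = 1.0 - np.exp(-r * edges[-1])                      # P(detected within 10 min)
          return -np.sum(counts * np.log(cell / p_det))

      r_hat = minimize_scalar(neg_loglik, bounds=(1e-4, 5.0), method="bounded").x
      print(f"detection rate {r_hat:.3f} per min, "
            f"overall detection probability {1.0 - np.exp(-10.0 * r_hat):.2f}")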

  4. Interrelated structure of high altitude atmospheric profiles

    NASA Technical Reports Server (NTRS)

    Engler, N. A.; Goldschmidt, M. A.

    1972-01-01

    A preliminary development of a mathematical model to compute probabilities of thermodynamic profiles is presented. The model assumes an exponential expression for pressure and utilizes the hydrostatic law and equation of state in the determination of density and temperature. It is shown that each thermodynamic variable can be factored into the product of steady state and perturbation functions. The steady state functions have profiles similar to those of the 1962 standard atmosphere while the perturbation functions oscillate about 1. Limitations of the model and recommendations for future work are presented.

  5. A Stochastic Super-Exponential Growth Model for Population Dynamics

    NASA Astrophysics Data System (ADS)

    Avila, P.; Rekker, A.

    2010-11-01

    A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.

  6. Structural analysis of vibroacoustical processes

    NASA Technical Reports Server (NTRS)

    Gromov, A. P.; Myasnikov, L. L.; Myasnikova, Y. N.; Finagin, B. A.

    1973-01-01

    The method of automatic identification of acoustical signals by means of segmentation was used to investigate noises and vibrations in machines and mechanisms for cybernetic diagnostics. The structural analysis consists of presenting a noise or vibroacoustical signal as a sequence of segments, determined by time quantization, in which each segment is characterized by specific spectral characteristics. The structural spectrum is plotted as a histogram of the segments, i.e., as the probability density of appearance of a segment as a function of segment type. It is assumed that the conditions of ergodic processes are maintained.

  7. Self-Supervised Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Shu-Xia; Zhang, Yu-Ru; Research Group PLASMANT, Department of Chemistry, University of Antwerp, Universiteitsplein 1, B-2610 Antwerp

    A hybrid model is used to investigate the fragmentation of C4F8 inductive discharges. Indeed, the resulting reactive species are crucial for the optimization of the Si-based etching process, since they determine the mechanisms of fluorination, polymerization, and sputtering. In this paper, we present the dissociation degree, the density ratio of F vs. CxFy (i.e., fluorocarbon (fc) neutrals), the neutral vs. positive ion density ratio, details on the neutral and ion components, and fractions of various fc neutrals (or ions) in the total fc neutral (or ion) density in a C4F8 inductively coupled plasma source, as well as the effect of pressure and power on these results. To analyze the fragmentation behavior, the electron density and temperature and electron energy probability function (EEPF) are investigated. Moreover, the main electron-impact generation sources for all considered neutrals and ions are determined from the complicated C4F8 reaction set used in the model. The C4F8 plasma fragmentation is explained, taking into account many factors, such as the EEPF characteristics, the dominance of primary and secondary processes, and the thresholds of dissociation and ionization. The simulation results are compared with experiments from literature, and reasonable agreement is obtained. Some discrepancies are observed, which can probably be attributed to the simplified polymer surface kinetics assumed in the model.

  9. Bulk plasma fragmentation in a C4F8 inductively coupled plasma: A hybrid modeling study

    NASA Astrophysics Data System (ADS)

    Zhao, Shu-Xia; Zhang, Yu-Ru; Gao, Fei; Wang, You-Nian; Bogaerts, Annemie

    2015-06-01

    A hybrid model is used to investigate the fragmentation of C4F8 inductive discharges. Indeed, the resulting reactive species are crucial for the optimization of the Si-based etching process, since they determine the mechanisms of fluorination, polymerization, and sputtering. In this paper, we present the dissociation degree, the density ratio of F vs. CxFy (i.e., fluorocarbon (fc) neutrals), the neutral vs. positive ion density ratio, details on the neutral and ion components, and fractions of various fc neutrals (or ions) in the total fc neutral (or ion) density in a C4F8 inductively coupled plasma source, as well as the effect of pressure and power on these results. To analyze the fragmentation behavior, the electron density and temperature and electron energy probability function (EEPF) are investigated. Moreover, the main electron-impact generation sources for all considered neutrals and ions are determined from the complicated C4F8 reaction set used in the model. The C4F8 plasma fragmentation is explained, taking into account many factors, such as the EEPF characteristics, the dominance of primary and secondary processes, and the thresholds of dissociation and ionization. The simulation results are compared with experiments from literature, and reasonable agreement is obtained. Some discrepancies are observed, which can probably be attributed to the simplified polymer surface kinetics assumed in the model.

  10. The H I-to-H2 Transition in a Turbulent Medium

    NASA Astrophysics Data System (ADS)

    Bialy, Shmuel; Burkhart, Blakesley; Sternberg, Amiel

    2017-07-01

    We study the effect of density fluctuations induced by turbulence on the H I/H2 structure in photodissociation regions (PDRs) both analytically and numerically. We perform magnetohydrodynamic numerical simulations for both subsonic and supersonic turbulent gas and chemical H I/H2 balance calculations. We derive atomic-to-molecular density profiles and the H I column density probability density function (PDF) assuming chemical equilibrium. We find that, while the H I/H2 density profiles are strongly perturbed in turbulent gas, the mean H I column density is well approximated by the uniform-density analytic formula of Sternberg et al. The PDF width depends on (a) the radiation intensity-to-mean density ratio, (b) the sonic Mach number, and (c) the turbulence decorrelation scale, or driving scale. We derive an analytic model for the H I PDF and demonstrate how our model, combined with 21 cm observations, can be used to constrain the Mach number and driving scale of turbulent gas. As an example, we apply our model to observations of H I in the Perseus molecular cloud. We show that a narrow observed H I PDF may imply small-scale decorrelation, pointing to the potential importance of subcloud-scale turbulence driving.

  11. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  12. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Höft, J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  13. Parameterizing deep convection using the assumed probability density function method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storer, R. L.; Griffin, B. M.; Hoft, Jan

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  14. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.

  15. Inferring extinction risks from sighting records.

    PubMed

    Thompson, C J; Lee, T E; Stone, L; McCarthy, M A; Burgman, M A

    2013-12-07

    Estimating the probability that a species is extinct based on historical sighting records is important when deciding how much effort and money to invest in conservation policies. The framework we offer is more general than others in the literature to date. Our formulation allows for definite and uncertain observations, and thus better accommodates the realities of sighting record quality. Typically, the probability of observing a species given it is extant/extinct is challenging to define, especially when the possibility of a false observation is included. As such, we assume that observation probabilities derive from a representative probability density function. We incorporate this randomness in two different ways ("quenched" versus "annealed") using a framework that is equivalent to a Bayes formulation. The two methods can lead to significantly different estimates for extinction. In the case of definite sightings only, we provide an explicit deterministic calculation (in which observation probabilities are point estimates). Furthermore, our formulation replicates previous work in certain limiting cases. In the case of uncertain sightings, we allow for the possibility of several independent observational types (specimen, photographs, etc.). The method is applied to the Caribbean monk seal, Monachus tropicalis (which has only definite sightings), and synthetic data, with uncertain sightings. © 2013 Elsevier Ltd. All rights reserved.
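
    For the definite-sightings limiting case, the classical Solow (1993) test that such frameworks replicate can be written in a few lines; the sighting times below are hypothetical, and this is not the authors' more general quenched/annealed formulation:

      import numpy as np

      # Classical Solow (1993) test: if the species is extant and sightings arise from
      # a stationary process over (0, T], the n sighting times are uniform, so the
      # p-value for persistence given the last sighting at t_n is (t_n / T) ** n.
      sightings = np.array([3.0, 11.0, 18.0, 24.0, 29.0, 35.0])   # hypothetical years
      T = 60.0                                                     # observation period

      n, t_n = len(sightings), sightings.max()
      p_persistence = (t_n / T) ** n
      print(f"n = {n}, last sighting at t = {t_n}, p-value for persistence = {p_persistence:.4f}")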

  16. Halo correlations in nonlinear cosmic density fields

    NASA Astrophysics Data System (ADS)

    Bernardeau, F.; Schaeffer, R.

    1999-09-01

    The question we address in this paper is the determination of the correlation properties of the dark matter halos appearing in cosmic density fields once they have undergone a strongly nonlinear evolution induced by gravitational dynamics. A series of previous works have given indications of the kind of non-Gaussian features induced by nonlinear evolution in terms of the high-order correlation functions. Assuming such patterns for the matter field, i.e. that the high-order correlation functions behave as products of two-body correlation functions, we derive the correlation properties of the halos, which are assumed to represent the correlation properties of galaxies or clusters. The hierarchical pattern originally induced by gravity is shown to be conserved for the halos. The strength of their correlations at any order varies, however, but is found to depend only on their internal properties, namely on the parameter x ~ m/r^(3-gamma), where m is the mass of the halo, r its size, and gamma is the power-law index of the two-body correlation function. This internal parameter is seen to be close to the depth of the internal potential well of virialized objects. We were able to derive the explicit form of the generating function of the moments of the halo counts probability distribution function. In particular we show explicitly that, generically, S_P(x) -> P(P-2) in the rare halo limit. Various illustrations of our general results are presented. As a function of the properties of the underlying matter field, we construct the count probabilities for halos and in particular discuss the halo void probability. We evaluate the dependence of the halo mass function on the environment: within clusters, hierarchical clustering implies the higher masses are favored. These properties solely arise from what is a natural bias (i.e., naturally induced by gravity) between the observed objects and the unseen matter field, and how it manifests itself depending on which selection effects are imposed.

  17. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear no-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity, there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
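
    The contrast between uniform and hot-spot deposition follows directly from Poisson hit statistics: with mean hit number proportional to dose, P(at least one hit) is nearly linear at small means, while a large local enhancement pushes cells into the multiple-hit regime even at low average dose. The conversion factors in the sketch below are hypothetical:

      import numpy as np

      def hit_probabilities(mean_hits):
          """Poisson cell-hit statistics for a given mean number of alpha traversals."""
          p_ge1 = 1.0 - np.exp(-mean_hits)
          p_ge2 = 1.0 - np.exp(-mean_hits) * (1.0 + mean_hits)
          return p_ge1, p_ge2

      # Hypothetical conversion from average dose to mean hits per cell nucleus.
      uniform_hits_per_mGy = 0.005
      hotspot_enhancement = 100.0          # assumed local deposition enhancement at the carina

      for dose in (10.0, 30.0, 100.0):     # average doses in mGy
          u1, u2 = hit_probabilities(dose * uniform_hits_per_mGy)
          h1, h2 = hit_probabilities(dose * uniform_hits_per_mGy * hotspot_enhancement)
          print(f"{dose:6.1f} mGy  uniform: P(>=1)={u1:.3f}, P(>=2)={u2:.4f}   "
                f"hot spot: P(>=1)={h1:.3f}, P(>=2)={h2:.3f}")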

  18. A phenomenological pulsar model

    NASA Technical Reports Server (NTRS)

    Michel, F. C.

    1978-01-01

    Particle injection energies and rates previously calculated for the stellar wind generation by rotating magnetized neutron stars are adopted. It is assumed that the ambient space-charge density being emitted to form this wind is bunched. These considerations immediately place the coherent radio frequency luminosity from such bunches near 10 to the 28th erg/s for typical pulsar parameters. A comparable amount of incoherent radiation is emitted for typical (1 second) pulsars. For very rapid pulsars, however, the latter component grows more rapidly than the available energy sources. The comparatively low radio luminosity of the Crab and Vela pulsars is attributed to both components being limited in the same ratio. The incoherent radiation essentially has a synchrotron spectrum and extends to gamma-ray energies; consequently the small part of the total luminosity that is at optical wavelengths is unobservable. Assuming full coherence at all wavelengths short of a critical length gives a spectral index for the flux density of -8/3 at higher frequencies. The finite energy available from the injected particles would force the spectrum to roll over below about 100 MHz, although intrinsic morphological factors probably enter for any specific pulsar as well.

  19. Monte Carlo PDF method for turbulent reacting flow in a jet-stirred reactor

    NASA Astrophysics Data System (ADS)

    Roekaerts, D.

    1992-01-01

    A stochastic algorithm for the solution of the modeled scalar probability density function (PDF) transport equation for single-phase turbulent reacting flow is described. Cylindrical symmetry is assumed. The PDF is represented by ensembles of N representative values of the thermochemical variables in each cell of a nonuniform finite-difference grid and operations on these elements representing convection, diffusion, mixing and reaction are derived. A simplified model and solution algorithm which neglects the influence of turbulent fluctuations on mean reaction rates is also described. Both algorithms are applied to a selectivity problem in a real reactor.

  20. Dielectric response in Bloch’s hydrodynamic model of an electron-ion plasma

    NASA Astrophysics Data System (ADS)

    Ishikawa, K.; Felderhof, B. U.

    The linear response of an electron-ion plasma to an applied oscillating electric field is studied within the framework of Bloch’s classical hydrodynamic model. The ions are assumed to be fixed in space and distributed according to a known probability distribution. The linearized equations of motion for electron density and flow velocity are studied with the aid of a multiple scattering analysis and cluster expansion. This allows systematic reduction of the many-ion problem to a composition of few-ion problems, and shows how the longitudinal dielectric response function can in principle be calculated.

  1. Statistical Orbit Determination using the Particle Filter for Incorporating Non-Gaussian Uncertainties

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

    2012-01-01

    The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
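
    A minimal bootstrap particle filter on a toy one-dimensional system illustrates the propagate-weight-resample cycle; the process and measurement models below are hypothetical stand-ins for the orbital dynamics and tracking measurements:

      import numpy as np

      rng = np.random.default_rng(5)

      def f(x):                 # hypothetical nonlinear process model
          return 0.9 * x + 2.0 * np.sin(0.1 * x)

      def h(x):                 # hypothetical nonlinear measurement model
          return x ** 2 / 20.0

      Q, R, N, T = 1.0, 0.5, 2000, 50      # noise variances, particle count, time steps

      # Simulate a truth trajectory and noisy measurements.
      x_true = np.zeros(T)
      for t in range(1, T):
          x_true[t] = f(x_true[t - 1]) + rng.normal(0.0, np.sqrt(Q))
      y = h(x_true) + rng.normal(0.0, np.sqrt(R), size=T)

      # Bootstrap particle filter: propagate, weight by the likelihood, resample.
      particles = rng.normal(0.0, 2.0, size=N)     # ensemble approximating the prior PDF
      estimates = np.zeros(T)
      for t in range(T):
          particles = f(particles) + rng.normal(0.0, np.sqrt(Q), size=N)
          w = np.exp(-0.5 * (y[t] - h(particles)) ** 2 / R)
          w /= w.sum()
          estimates[t] = np.sum(w * particles)                 # posterior mean estimate
          particles = particles[rng.choice(N, size=N, p=w)]    # multinomial resampling

      print("RMS estimation error:", np.sqrt(np.mean((estimates - x_true) ** 2)))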

  2. Long-term volcanic hazard assessment on El Hierro (Canary Islands)

    NASA Astrophysics Data System (ADS)

    Becerril, L.; Bartolini, S.; Sobradelo, R.; Martí, J.; Morales, J. M.; Galindo, I.

    2014-07-01

    Long-term hazard assessment, one of the bastions of risk-mitigation programs, is required for land-use planning and for developing emergency plans. To ensure quality and representative results, long-term volcanic hazard assessment requires several sequential steps to be completed, which include the compilation of geological and volcanological information, the characterisation of past eruptions, spatial and temporal probabilistic studies, and the simulation of different eruptive scenarios. Despite being a densely populated active volcanic region that receives millions of visitors per year, no systematic hazard assessment has ever been conducted on the Canary Islands. In this paper we focus our attention on El Hierro, the youngest of the Canary Islands and the most recently affected by an eruption. We analyse the past eruptive activity to determine the spatial and temporal probability, and likely style of a future eruption on the island, i.e. the where, when and how. By studying the past eruptive behaviour of the island and assuming that future eruptive patterns will be similar, we aim to identify the most likely volcanic scenarios and corresponding hazards, which include lava flows, pyroclastic fallout and pyroclastic density currents (PDCs). Finally, we estimate their probability of occurrence. The end result, through the combination of the most probable scenarios (lava flows, pyroclastic density currents and ashfall), is the first qualitative integrated volcanic hazard map of the island.

  3. The H i-to-H{sub 2} Transition in a Turbulent Medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bialy, Shmuel; Sternberg, Amiel; Burkhart, Blakesley, E-mail: shmuelbi@mail.tau.ac.il

    2017-07-10

    We study the effect of density fluctuations induced by turbulence on the H i/H2 structure in photodissociation regions (PDRs) both analytically and numerically. We perform magnetohydrodynamic numerical simulations for both subsonic and supersonic turbulent gas and chemical H i/H2 balance calculations. We derive atomic-to-molecular density profiles and the H i column density probability density function (PDF) assuming chemical equilibrium. We find that, while the H i/H2 density profiles are strongly perturbed in turbulent gas, the mean H i column density is well approximated by the uniform-density analytic formula of Sternberg et al. The PDF width depends on (a) the radiation intensity-to-mean density ratio, (b) the sonic Mach number, and (c) the turbulence decorrelation scale, or driving scale. We derive an analytic model for the H i PDF and demonstrate how our model, combined with 21 cm observations, can be used to constrain the Mach number and driving scale of turbulent gas. As an example, we apply our model to observations of H i in the Perseus molecular cloud. We show that a narrow observed H i PDF may imply small-scale decorrelation, pointing to the potential importance of subcloud-scale turbulence driving.

  4. Fluctuations and intermittent poloidal transport in a simple toroidal plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goud, T. S.; Ganesh, R.; Saxena, Y. C.

    In a simple magnetized toroidal plasma, fluctuation induced poloidal flux is found to be significant in magnitude. The probability distribution function of the fluctuation induced poloidal flux is observed to be strongly non-Gaussian in nature; however, in some cases, the distribution shows good agreement with the analytical form [Carreras et al., Phys. Plasmas 3, 2664 (1996)], assuming a coupling between the near Gaussian density and poloidal velocity fluctuations. The observed non-Gaussian nature of the fluctuation induced poloidal flux and other plasma parameters such as density and fluctuating poloidal velocity in this device is due to the intermittent and bursty nature of poloidal transport. In the simple magnetized torus used here, such an intermittent fluctuation induced poloidal flux is found to play a crucial role in generating the poloidal flow.

  5. The kinetic temperature in the interior of the Xi Ophiuchi cloud from Copernicus observations of interstellar C2

    NASA Technical Reports Server (NTRS)

    Snow, T. P., Jr.

    1978-01-01

    Satellite observations of transitions of C2 at 2312 Angstroms in the spectrum of Xi Ophiuchi were carried out to evaluate the kinetic temperature of the interior cloud. A column density of 1.22 x 10 to the 12th per sq cm is derived from an absorption feature at the 4 sigma level of significance at the position of the R(0) line. This would imply a rotational temperature of not more than 22 K, with a more probable value of less than 16 K. Since total column density (3.2 x 10 to the 12th per sq cm) is found to be lower by a factor of approximately 4 than that which had been previously reported, substantial photo-dissociation of C2 is assumed.

  6. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
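
    A sketch of the underlying filtered Poisson process with either positive-definite (exponential) or two-sided (Laplace, an assumed alternative) pulse amplitudes, together with the empirical characteristic function used for parameter estimation; all parameter values are illustrative:

      import numpy as np

      rng = np.random.default_rng(6)

      def shot_noise(T=2000.0, dt=0.05, rate=0.5, tau=1.0, amplitudes="exponential"):
          """Filtered Poisson process: one-sided exponential pulses of duration tau
          arriving at a Poisson rate, with exponential or Laplace amplitudes."""
          t = np.arange(int(T / dt)) * dt
          arrivals = rng.uniform(0.0, T, size=rng.poisson(rate * T))
          if amplitudes == "exponential":
              amp = rng.exponential(1.0, size=arrivals.size)     # positive-definite signal
          else:
              amp = rng.laplace(0.0, 1.0, size=arrivals.size)    # amplitudes of either sign
          x = np.zeros(t.size)
          for tk, ak in zip(arrivals, amp):
              mask = t >= tk
              x[mask] += ak * np.exp(-(t[mask] - tk) / tau)
          return x

      def empirical_cf(x, u):
          """Empirical characteristic function, the object used for parameter estimation."""
          return np.mean(np.exp(1j * np.outer(u, x)), axis=1)

      u = np.linspace(0.2, 2.0, 4)
      for kind in ("exponential", "laplace"):
          x = shot_noise(amplitudes=kind)
          skew = float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)
          print(kind, "skewness:", round(skew, 2),
                "|ECF|:", np.abs(empirical_cf(x, u)).round(3))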

  7. The effect of unresolved contaminant stars on the cross-matching of photometric catalogues

    NASA Astrophysics Data System (ADS)

    Wilson, Tom J.; Naylor, Tim

    2017-07-01

    A fundamental process in astrophysics is the matching of two photometric catalogues. It is crucial that the correct objects be paired, and that their photometry does not suffer from any spurious additional flux. We compare the positions of sources in Wide-field Infrared Survey Explorer (WISE), INT Photometric H α Survey, Two Micron All Sky Survey and AAVSO Photometric All Sky Survey with Gaia Data Release 1 astrometric positions. We find that the separations are described by a combination of a Gaussian distribution, wider than naively assumed based on their quoted uncertainties, and a large wing, which some authors ascribe to proper motions. We show that this is caused by flux contamination from blended stars not treated separately. We provide linear fits between the quoted Gaussian uncertainty and the core fit to the separation distributions. We show that at least one in three of the stars in the faint half of a given catalogue will suffer from flux contamination above the 1 per cent level when the density of catalogue objects per point spread function area is above approximately 0.005. This has important implications for the creation of composite catalogues. It is important for any closest neighbour matches as there will be a given fraction of matches that are flux contaminated, while some matches will be missed due to significant astrometric perturbation by faint contaminants. In the case of probability-based matching, this contamination affects the probability density function of matches as a function of distance. This effect results in up to 50 per cent fewer counterparts being returned as matches, assuming Gaussian astrometric uncertainties for WISE-Gaia matching in crowded Galactic plane regions, compared with a closest neighbour match.
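    A minimal sketch of a closest-neighbour cross-match of the kind discussed above, using synthetic positions and an assumed combined Gaussian astrometric uncertainty (none of the numbers correspond to the real catalogues):

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal sketch of a closest-neighbour cross-match with a k-sigma acceptance
# radius, assuming purely Gaussian astrometric uncertainties. Positions are
# synthetic (arcseconds on a small tangent plane), not the real catalogues.
rng = np.random.default_rng(2)
truth = rng.uniform(0.0, 3600.0, size=(2000, 2))
cat_a = truth + rng.normal(0.0, 0.3, truth.shape)            # a noisier WISE-like catalogue
cat_b = truth[:1500] + rng.normal(0.0, 0.05, (1500, 2))      # a precise Gaia-like catalogue

sigma = np.hypot(0.3, 0.05)                                  # combined 1-sigma separation
dist, idx = cKDTree(cat_b).query(cat_a, k=1)                 # nearest neighbour in cat_b
matched = dist < 5.0 * sigma
print(f"matched {matched.sum()} of {len(cat_a)} sources, "
      f"median separation {np.median(dist[matched]):.2f} arcsec")
```

    Flux contamination by unresolved neighbours, as quantified in the abstract, would broaden the separation distribution beyond this Gaussian core and push some true counterparts outside any fixed acceptance radius.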

  8. Dark matter and cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schramm, D.N.

    1992-03-01

    The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.

  9. Dark matter and cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schramm, D.N.

    1992-03-01

    The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.

  10. Dark matter and cosmology

    NASA Astrophysics Data System (ADS)

    Schramm, David N.

    1992-07-01

    The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.

  11. Dark matter and cosmology

    NASA Astrophysics Data System (ADS)

    Schramm, D. N.

    1992-03-01

    The cosmological dark matter problem is reviewed. The Big Bang nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the omega = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between 'cold' and 'hot' non-baryonic candidates is shown to depend on the assumed 'seeds' that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages, and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.

  12. Modeling the subfilter scalar variance for large eddy simulation in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Cheminet, Adam; Blanquart, Guillaume

    2011-11-01

    Static and dynamic models for the subfilter scalar variance in homogeneous isotropic turbulence are investigated using direct numerical simulations (DNS) of a linearly forced passive scalar field. First, we introduce a new scalar forcing technique conditioned only on the scalar field which allows the fluctuating scalar field to reach a statistically stationary state. Statistical properties, including second and third statistical moments, spectra, and probability density functions of the scalar field, have been analyzed. Using this technique, we performed constant-density and variable-density DNS of scalar mixing in isotropic turbulence. The results are used in an a priori study of scalar variance models. Emphasis is placed on further studying the dynamic model introduced by G. Balarac, H. Pitsch and V. Raman [Phys. Fluids 20 (2008)]. Scalar variance models based on Bedford and Yeo's expansion are accurate for small filter widths, but errors arise in the inertial subrange. Results suggest that a constant coefficient computed from an assumed Kolmogorov spectrum is often sufficient to predict the subfilter scalar variance.
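    A minimal a priori sketch of the kind of test described above, using a synthetic scalar field in place of DNS data; the filter width and the constant model coefficient are assumed values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Minimal a priori sketch with a synthetic scalar field standing in for DNS
# data: the exact subfilter variance is filter(phi^2) - filter(phi)^2, and a
# simple static model C * Delta^2 * |grad(filtered phi)|^2 is scored against
# it. The filter width and the coefficient C = 0.1 are assumed values.
rng = np.random.default_rng(3)
phi = uniform_filter(rng.normal(size=(64, 64, 64)), size=4, mode="wrap")   # toy "DNS" scalar

filt = 8                                                     # filter width in grid points
phi_bar = uniform_filter(phi, size=filt, mode="wrap")
var_sf = uniform_filter(phi**2, size=filt, mode="wrap") - phi_bar**2       # exact subfilter variance

grad = np.gradient(phi_bar)
model = 0.1 * filt**2 * sum(g**2 for g in grad)              # static model with assumed coefficient

print("correlation of model with exact subfilter variance:",
      round(np.corrcoef(var_sf.ravel(), model.ravel())[0, 1], 3))
```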

  13. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the assumed distribution, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and if the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
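    For reference, the commonly quoted PPP example (two measurements, 1.5 and 1.0, with 10 per cent independent errors and a fully correlated 20 per cent normalization error; these numbers are an assumption here, not taken from the paper) reproduces the puzzling 0.88 when the covariance is built from the measured values:

```python
import numpy as np

# Sketch of the classic Peelle example. The numbers (measurements 1.5 and 1.0,
# 10% independent errors, fully correlated 20% normalization error) are the
# commonly quoted illustration and are assumed here, not taken from the paper.
m = np.array([1.5, 1.0])
stat = 0.10 * m                                    # independent (statistical) errors
V = np.diag(stat**2) + 0.20**2 * np.outer(m, m)    # add the fully correlated component

ones = np.ones_like(m)
w = np.linalg.solve(V, ones)                       # V^{-1} 1
mu_hat = w @ m / (w @ ones)                        # generalized least-squares estimate
sigma = 1.0 / np.sqrt(w @ ones)
print(f"GLS estimate = {mu_hat:.3f} +/- {sigma:.3f}")   # ~0.88, below both measurements
```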

  14. Natural environment application for NASP-X-30 design and mission planning

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.; Hill, C. K.; Brown, S. C.; Batts, G. W.

    1993-01-01

    The NASA/MSFC Mission Analysis Program has recently been utilized in various National Aero-Space Plane (NASP) mission and operational planning scenarios. This paper focuses on presenting various atmospheric constraint statistics, based on assumed NASP mission phases, using established natural environment design, parametric, threshold values. Probabilities of no-go are calculated using atmospheric parameters such as temperature, humidity, density altitude, peak/steady-state winds, cloud cover/ceiling, thunderstorms, and precipitation. The program, although developed to evaluate test or operational missions after flight constraints have been established, can provide valuable information in the design phase of the NASP X-30 program. Inputting the design values as flight constraints, the Mission Analysis Program returns the probability of no-go, or launch delay, by hour and by month. This output tells the X-30 program manager whether the design values are stringent enough to meet the required test flight schedules.

  15. Predictions of the causal entropic principle for environmental conditions of the universe

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Frey, Andrew R.; Holder, Gilbert

    2008-03-01

    The causal entropic principle has been proposed as an alternative to the anthropic principle for understanding the magnitude of the cosmological constant. In this approach, the probability to create observers is assumed to be proportional to the entropy production ΔS in a maximal causally connected region—the causal diamond. We improve on the original treatment by better quantifying the entropy production due to stars, using an analytic model for the star formation history which accurately accounts for changes in cosmological parameters. We calculate the dependence of ΔS on the density contrast Q=δρ/ρ, and find that our universe is much closer to the most probable value of Q than in the usual anthropic approach and that probabilities are relatively weakly dependent on this amplitude. In addition, we make first estimates of the dependence of ΔS on the baryon fraction and overall matter abundance. Finally, we also explore the possibility that decays of dark matter, suggested by various observed gamma ray excesses, might produce a comparable amount of entropy to stars.

  16. A possible loophole in the theorem of Bell.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.

  17. Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference.

    PubMed

    Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing

    2016-01-01

    In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications.

  18. Outage Probability of MRC for κ-μ Shadowed Fading Channels under Co-Channel Interference

    PubMed Central

    Chen, Changfang; Shu, Minglei; Wang, Yinglong; Yang, Ming; Zhang, Chongqing

    2016-01-01

    In this paper, exact closed-form expressions are derived for the outage probability (OP) of the maximal ratio combining (MRC) scheme in the κ-μ shadowed fading channels, in which both the independent and correlated shadowing components are considered. The scenario assumes the received desired signals are corrupted by the independent Rayleigh-faded co-channel interference (CCI) and background white Gaussian noise. To this end, first, the probability density function (PDF) of the κ-μ shadowed fading distribution is obtained in the form of a power series. Then the incomplete generalized moment-generating function (IG-MGF) of the received signal-to-interference-plus-noise ratio (SINR) is derived in the closed form. By using the IG-MGF results, closed-form expressions for the OP of MRC scheme are obtained over the κ-μ shadowed fading channels. Simulation results are included to validate the correctness of the analytical derivations. These new statistical results can be applied to the modeling and analysis of several wireless communication systems, such as body centric communications. PMID:27851817

  19. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
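    A minimal sketch of the procedure (not the released code): a Gaussian field with a toy power spectrum is exponentiated to a log-normal density contrast and Poisson-sampled into galaxy counts; the spectrum, box size, mean density, and log-field amplitude are all placeholder values.

```python
import numpy as np

# Minimal sketch (not the released code): generate a Gaussian random field with
# a toy power spectrum on a periodic grid, exponentiate it to obtain a
# zero-mean log-normal density contrast, and Poisson-sample galaxy counts in
# each cell. The spectrum, box size and mean density are placeholder values.
rng = np.random.default_rng(42)
ngrid, boxsize, nbar = 64, 500.0, 1e-3            # cells per side, Mpc/h, galaxies per (Mpc/h)^3

k1d = 2 * np.pi * np.fft.fftfreq(ngrid, d=boxsize / ngrid)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kk = np.sqrt(kx**2 + ky**2 + kz**2)
pk = np.where(kk > 0, (kk + 1e-3) ** -1.5, 0.0)   # toy power-law P(k), arbitrary amplitude

white = rng.normal(size=(ngrid,) * 3)
gauss = np.fft.ifftn(np.fft.fftn(white) * np.sqrt(pk)).real
g = gauss / gauss.std() * 0.8                     # rescale the log-field to sigma = 0.8 (assumed)

delta_ln = np.exp(g - 0.5 * g.var()) - 1.0        # log-normal contrast with mean close to zero
cell_vol = (boxsize / ngrid) ** 3
counts = rng.poisson(nbar * cell_vol * (1.0 + delta_ln))   # Poisson-sample galaxies per cell
print("galaxies drawn:", counts.sum(), "mean contrast:", round(float(delta_ln.mean()), 3))
```

    The velocity assignment from the linearised continuity equation, which the code described above also performs, is omitted from this sketch.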

  20. Relevance of ancestral surname identification in pedigrees of Afrikaner families with familial hypercholesterolaemia.

    PubMed

    Torrington, M; Brink, P A

    1990-03-17

    Familial hypercholesterolaemia (FH) is more prevalent among Afrikaans-speaking individuals in South Africa than elsewhere. Founder effects have been suggested as an explanation. A study was undertaken that demonstrated ancestral links for a low-density lipoprotein receptor allele, haplotype No. 2, in the two lines of descent identified and in 2 other known pedigrees with the same haplotype. Probable founder members for this haplotype are identified. These differ from the founder members assumed to be responsible for a majority of FH. A minor founder effect is suggested. Explanations are given for the apparent lesser prevalence of the second haplotype associated with FH.

  1. Process, System, Causality, and Quantum Mechanics: A Psychoanalysis of Animal Faith

    NASA Astrophysics Data System (ADS)

    Etter, Tom; Noyes, H. Pierre

    We shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution J on a set of random variables containing x and y, define a link between x and y to be the condition x=y on J. Define the state D of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule, and the unitary dynamical law whose best-known form is the Schrödinger equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PD) and D'T = TD respectively, where P is a projection, D and D' are (von Neumann) density matrices, and T is a unitary transformation. We'll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all D's are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ, and also how they can coexist in natural harmony with each other, as they must in quantum measurement, which we'll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.

  2. The chemical evolution of molecular clouds

    NASA Technical Reports Server (NTRS)

    Iglesias, E.

    1977-01-01

    The nonequilibrium chemistry of dense molecular clouds (10,000 to 1 million hydrogen molecules per cu cm) is studied in the framework of a model that includes the latest published chemical data and most of the recent theoretical advances. In this model the only important external source of ionization is assumed to be high-energy cosmic-ray bombardment; standard charge-transfer reactions are taken into account as well as reactions that transfer charge from molecular ions to trace-metal atoms. Schemes are proposed for the synthesis of such species as NCO, HNCO, and CN. The role played by adsorption and condensation of molecules on the surface of dust grains is investigated, and effects on the chemical evolution of a dense molecular cloud are considered which result from varying the total density or the elemental abundances and from assuming negligible or severe condensation of gaseous species on dust grains. It is shown that the chemical-equilibrium time scale is given approximately by the depletion times of oxygen and nitrogen when the condensation efficiency is negligible; that this time scale is probably in the range from 1 to 4 million years, depending on the elemental composition and initial conditions in the cloud; and that this time scale is insensitive to variations in the total density.

  3. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.

  4. Microwave inversion of leaf area and inclination angle distributions from backscattered data

    NASA Technical Reports Server (NTRS)

    Lang, R. H.; Saleh, H. A.

    1985-01-01

    The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontal-like polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
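    The regularized inversion step can be sketched generically as follows; the kernel, "true" distribution, noise level, and regularization strength below are toy assumptions, not the disk-scattering kernel of the paper.

```python
import numpy as np

# Minimal sketch of Phillips-Twomey regularization with a second-difference
# smoothing matrix for a generic first-kind Fredholm problem b = A @ x + noise.
# The Gaussian kernel, "true" distribution, noise level and lambda are toy
# assumptions, not the disk-scattering kernel of the paper.
rng = np.random.default_rng(1)
n = 80
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05**2))   # toy smoothing kernel
x_true = np.exp(-((s - 0.4) ** 2) / (2 * 0.08**2))              # "true" distribution
b = A @ x_true + 0.01 * rng.normal(size=n)

D = np.diff(np.eye(n), n=2, axis=0)          # second-difference (smoothing) operator
lam = 1e-3                                   # regularization strength

x_reg = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)     # regularized inverse
print("max absolute error of the retrieval:", round(float(np.max(np.abs(x_reg - x_true))), 3))
```

    The second-difference penalty is what makes the ill-posed inversion stable; without it the direct solve amplifies the noise in b.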

  5. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  6. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  7. Bell's theorem and the problem of decidability between the views of Einstein and Bohr.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    Einstein, Podolsky, and Rosen (EPR) have designed a gedanken experiment that suggested a theory that was more complete than quantum mechanics. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete nor on the existence of hidden parameters. However, the well known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.

  8. A procedure for combining acoustically induced and mechanically induced loads (first passage failure design criterion)

    NASA Technical Reports Server (NTRS)

    Crowe, D. R.; Henricks, W.

    1983-01-01

    The combined load statistics are developed by taking the acoustically induced load to be a random population, assumed to be stationary. Each element of this ensemble of acoustically induced loads is assumed to have the same power spectral density (PSD), obtained previously from a random response analysis employing the given acoustic field in the STS cargo bay as a stationary random excitation. The mechanically induced load is treated as either (1) a known deterministic transient, or (2) a nonstationary random variable of known first and second statistical moments which vary with time. A method is then shown for determining the probability that the combined load would, at any time, have a value equal to or less than a certain level. Having obtained a statistical representation of how the acoustic and mechanical loads are expected to combine, an analytical approximation for defining design levels for these loads is presented using the First Passage failure criterion.

  9. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
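    A minimal numerical sketch of the net-count PDF for assumed values of IRR, the known blank expectation, and a hypothetical sample contribution:

```python
import numpy as np
from scipy.stats import poisson

# Minimal sketch of the PDF of the net count OC = G - IRR * B, where the gross
# count G and the blank count B are independent Poisson variables. The expected
# blank count in the *sample* count time (mu_b) is taken as known, as in the
# abstract; mu_s is a hypothetical sample contribution.
IRR = 3                       # sample counted an integer IRR times longer than the blank
mu_b = 6.0                    # expected blank counts during the sample count time
mu_s = 10.0                   # expected sample contribution to the gross count

g = np.arange(0, 81)
b = np.arange(0, 41)
pg = poisson.pmf(g, mu_s + mu_b)          # gross count over the sample count time
pb = poisson.pmf(b, mu_b / IRR)           # blank count over the (shorter) blank count time

# The net count takes values g - IRR*b on a lattice; accumulate joint probabilities.
net_pdf = {}
for gi, pgi in zip(g, pg):
    for bi, pbi in zip(b, pb):
        net = int(gi - IRR * bi)
        net_pdf[net] = net_pdf.get(net, 0.0) + pgi * pbi

vals = np.array(sorted(net_pdf))
probs = np.array([net_pdf[v] for v in vals])
print("P(OC <= 0) =", round(float(probs[vals <= 0].sum()), 4))
```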

  10. Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, V. A.

    2008-01-01

    There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate origins of downward propagating leaders and a lognormal distribution to generate return stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for 10,000 years with an assumed ground flash density and peak current distributions, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.

  11. Evaluation of Lightning Incidence to Elements of a Complex Structure: A Monte Carlo Approach

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Rakov, V. A.

    2008-01-01

    There are complex structures for which the installation and positioning of the lightning protection system (LPS) cannot be done using the lightning protection standard guidelines. As a result, there are some "unprotected" or "exposed" areas. In an effort to quantify the lightning threat to these areas, a Monte Carlo statistical tool has been developed. This statistical tool uses two random number generators: a uniform distribution to generate the origin of downward propagating leaders and a lognormal distribution to generate the corresponding return stroke peak currents. Downward leaders propagate vertically downward and their striking distances are defined by the polarity and peak current. Following the electrogeometrical concept, we assume that the leader attaches to the closest object within its striking distance. The statistical analysis is run for N years with an assumed ground flash density, and the output of the program is the probability of direct attachment to objects of interest with its corresponding peak current distribution.
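    A minimal sketch of such a Monte Carlo, for two hypothetical objects on flat ground; the striking-distance expression and the lognormal peak-current parameters are common assumptions, not values from the reports above.

```python
import numpy as np

# Minimal Monte Carlo sketch for two hypothetical objects on flat ground.
# Every number is an assumption: r = 10 * I**0.65 (I in kA, r in m) is a
# commonly used electrogeometric striking-distance expression, and the
# lognormal peak-current parameters are typical values for negative first strokes.
rng = np.random.default_rng(7)
objects = {"tower": (0.0, 0.0, 60.0), "pad": (30.0, 0.0, 2.0)}   # (x, y, height) in metres

years, flash_density, half_box = 10_000, 4.0e-6, 500.0           # flashes/m^2/yr, 1 km x 1 km box
n = rng.poisson(years * flash_density * (2 * half_box) ** 2)

xy = rng.uniform(-half_box, half_box, size=(n, 2))               # uniform leader origins
peak = rng.lognormal(np.log(31.0), 0.48, size=n)                 # peak current in kA
r_strike = 10.0 * peak ** 0.65                                   # striking distance in m

hits = {name: [] for name in objects}
for (x, y), current, r in zip(xy, peak, r_strike):
    # The leader descends vertically; it attaches to whichever object it first
    # comes within striking distance of, otherwise to ground (reached at z = r).
    best, z_best = None, r
    for name, (xo, yo, h) in objects.items():
        dh = np.hypot(x - xo, y - yo)
        if dh <= r:
            z = h + np.sqrt(r**2 - dh**2)        # interception height for this object
            if z > z_best:
                best, z_best = name, z
    if best is not None:
        hits[best].append(current)

for name, currents in hits.items():
    rate = len(currents) / years
    med = np.median(currents) if currents else float("nan")
    print(f"{name}: {rate:.4f} direct strikes/yr, median peak current {med:.1f} kA")
```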

  12. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
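    A minimal sketch of one constant-rate variant of such a removal model, using the 3/2/5-minute intervals and hypothetical counts; the per-minute detection rate is fitted by maximum likelihood and converted to a 10-minute detection probability.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch of a constant-rate removal model: detections are treated as a
# Poisson process with rate c per minute, so the probability of first detection
# in interval j is exp(-c*T_{j-1}) - exp(-c*T_j). The counts are hypothetical.
bounds_min = np.array([0.0, 3.0, 5.0, 10.0])   # interval boundaries in minutes (3/2/5 split)
counts = np.array([42, 11, 9])                 # birds first detected in each interval

def neg_log_lik(c):
    cell = np.exp(-c * bounds_min[:-1]) - np.exp(-c * bounds_min[1:])
    pi = cell / (1.0 - np.exp(-c * bounds_min[-1]))   # conditional on being detected at all
    return -np.sum(counts * np.log(pi))

res = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
c_hat = res.x
p_detect = 1.0 - np.exp(-c_hat * bounds_min[-1])
print(f"per-minute rate = {c_hat:.3f}, detection probability over 10 min = {p_detect:.2f}")
```

    Allowing c to differ by species, observer, or time of day, as in the study above, turns this into a model-selection exercise over several such likelihoods.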

  13. Heuristics can produce surprisingly rational probability estimates: Comment on Costello and Watts (2014).

    PubMed

    Nilsson, Håkan; Juslin, Peter; Winman, Anders

    2016-01-01

    Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014).

  14. A fully traits-based approach to modeling global vegetation distribution.

    PubMed

    van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M

    2014-09-23

    Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
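    A minimal sketch of the classification step described above, with synthetic trait values standing in for the mapped traits and fitted trait observations:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch: characterize each vegetation type by a Gaussian mixture over
# trait space (leaf mass per area, stem-specific density, seed mass), then
# assign grid cells to the most probable type given their predicted traits.
# All trait values here are synthetic placeholders, not the global database.
rng = np.random.default_rng(0)
types = {}
for name, centre in [("forest", [120.0, 0.6, 50.0]), ("grassland", [60.0, 0.4, 2.0])]:
    obs = rng.normal(centre, [20.0, 0.05, centre[2] * 0.3], size=(300, 3))
    types[name] = GaussianMixture(n_components=2, random_state=0).fit(obs)

grid_traits = rng.normal([90.0, 0.5, 20.0], [30.0, 0.1, 15.0], size=(1000, 3))  # traits per cell
log_like = np.column_stack([gm.score_samples(grid_traits) for gm in types.values()])
best = np.array(list(types))[log_like.argmax(axis=1)]
print("cells assigned per type:", {t: int((best == t).sum()) for t in types})
```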

  15. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability or neighborhood density have predominately been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1-week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically word learning improved as probability increased in the lowest quartile, worsened as probability increased in the midlow quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  16. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  17. An observational study of entrainment rate in deep convection

    DOE PAGES

    Guo, Xiaohao; Lu, Chunsong; Zhao, Tianliang; ...

    2015-09-22

    This study estimates entrainment rate and investigates its relationships with cloud properties in 156 deep convective clouds based on in-situ aircraft observations during the TOGA-COARE (Tropical Ocean Global Atmosphere Coupled Ocean Atmosphere Response Experiment) field campaign over the western Pacific. To the authors' knowledge, this is the first study on the probability density function of entrainment rate, the relationships between entrainment rate and cloud microphysics, and the effects of dry air sources on the calculated entrainment rate in deep convection from an observational perspective. Results show that the probability density function of entrainment rate can be well fitted by lognormal, gamma or Weibull distributions, with coefficients of determination of 0.82, 0.85 and 0.80, respectively. Entrainment tends to reduce temperature, water vapor content and moist static energy in cloud due to evaporative cooling and dilution. Inspection of the relationships between entrainment rate and microphysical properties reveals a negative correlation between volume-mean radius and entrainment rate, suggesting the potential dominance of the homogeneous mechanism in the clouds examined. The entrainment rate and environmental water vapor content show similar tendencies of variation with the distance of the assumed environmental air to the cloud edges. Their variation tendencies are non-monotonic due to the relatively short distance between adjacent clouds.
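    A minimal sketch of the distribution-fitting step, using a synthetic lognormal sample in place of the aircraft-derived entrainment rates and scoring each candidate distribution against the histogram:

```python
import numpy as np
from scipy import stats

# Minimal sketch: fit candidate distributions to a sample of entrainment rates
# and score each fit with a coefficient of determination against the histogram.
# The synthetic lognormal sample stands in for the aircraft-derived values.
rng = np.random.default_rng(11)
rates = rng.lognormal(mean=-1.0, sigma=0.6, size=156)      # hypothetical entrainment rates

hist, edges = np.histogram(rates, bins=15, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])

for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma),
                   ("weibull", stats.weibull_min)]:
    params = dist.fit(rates, floc=0.0)                     # keep the location fixed at zero
    pdf = dist.pdf(mid, *params)
    r2 = 1.0 - np.sum((hist - pdf) ** 2) / np.sum((hist - hist.mean()) ** 2)
    print(f"{name:10s} R^2 = {r2:.2f}")
```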

  18. An observational study of entrainment rate in deep convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Xiaohao; Lu, Chunsong; Zhao, Tianliang

    This study estimates entrainment rate and investigates its relationships with cloud properties in 156 deep convective clouds based on in-situ aircraft observations during the TOGA-COARE (Tropical Ocean Global Atmosphere Coupled Ocean Atmosphere Response Experiment) field campaign over the western Pacific. To the authors' knowledge, this is the first study on the probability density function of entrainment rate, the relationships between entrainment rate and cloud microphysics, and the effects of dry air sources on the calculated entrainment rate in deep convection from an observational perspective. Results show that the probability density function of entrainment rate can be well fitted by lognormal, gamma or Weibull distributions, with coefficients of determination of 0.82, 0.85 and 0.80, respectively. Entrainment tends to reduce temperature, water vapor content and moist static energy in cloud due to evaporative cooling and dilution. Inspection of the relationships between entrainment rate and microphysical properties reveals a negative correlation between volume-mean radius and entrainment rate, suggesting the potential dominance of the homogeneous mechanism in the clouds examined. The entrainment rate and environmental water vapor content show similar tendencies of variation with the distance of the assumed environmental air to the cloud edges. Their variation tendencies are non-monotonic due to the relatively short distance between adjacent clouds.

  19. Formulation of a correlated variables methodology for assessment of continuous gas resources with an application to the Woodford play, Arkoma Basin, eastern Oklahoma

    USGS Publications Warehouse

    Olea, R.A.; Houseknecht, D.W.; Garrity, C.P.; Cook, T.A.

    2011-01-01

    Shale gas is a form of continuous unconventional hydrocarbon accumulation whose resource estimation is unfeasible through the inference of pore volume. Under these circumstances, the usual approach is to base the assessment on well productivity through estimated ultimate recovery (EUR). Unconventional resource assessments that consider uncertainty are typically done by applying analytical procedures based on classical statistics theory that ignores geographical location, does not take into account spatial correlation, and assumes independence of EUR from other variables that may enter into the modeling. We formulate a new, more comprehensive approach based on sequential simulation to test methodologies known to be capable of more fully utilizing the data and overcoming unrealistic simplifications. Theoretical requirements demand modeling of EUR as areal density instead of well EUR. The new experimental methodology is illustrated by evaluating a gas play in the Woodford Shale in the Arkoma Basin of Oklahoma. Unlike previous assessments, we used net thickness and vitrinite reflectance as secondary variables correlated with cell EUR. In addition to the traditional probability distribution for undiscovered resources, the new methodology provides maps of EUR density and maps with probabilities to reach any given cell EUR, which are useful to visualize geographical variations in prospectivity.

  20. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is used, at first, to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and an angular-dependent externally incident flux upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  1. Age or stage structure? A comparison of dynamic outcomes from discrete age- and stage-structured population models.

    PubMed

    Wikan, Arild

    2012-06-01

    Discrete stage-structured density-dependent and discrete age-structured density-dependent population models are considered. Regarding the former, we prove that the model at hand is permanent (i.e., that the population will neither go extinct nor exhibit explosive oscillations), and, given density-dependent fecundity terms, we also show that species with delayed semelparous life histories tend to be more stable than species which possess precocious semelparous life histories. Moreover, our findings together with results obtained from other stage-structured models seem to illustrate a fairly general ecological principle, namely that iteroparous species are more stable than semelparous species. Our analysis of various age-structured models does not necessarily support the conclusions above. In fact, species with precocious life histories now appear to possess better stability properties than species with delayed life histories, especially in the iteroparous case. We also show that there are dynamical outcomes from semelparous age-structured models which we are not able to capture in corresponding stage-structured cases. Finally, both age- and stage-structured population models may generate periodic dynamics of low period (either exact or approximate). The important prerequisite is to assume density-dependent survival probabilities.
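    A minimal sketch of iterating a two-stage density-dependent model to check for permanence (bounded, non-zero densities); the parameter values and the specific density dependence are illustrative only, not the models analyzed in the paper.

```python
import numpy as np

# Minimal sketch of a two-stage (juvenile/adult) model with density-dependent
# survival, iterated to check for permanence (densities stay bounded and away
# from zero). All parameter values are illustrative assumptions.
def step(x, f=6.0, pj=0.4, pa=0.7, beta=0.01):
    juv, adult = x
    survival = np.exp(-beta * (juv + adult))      # density-dependent survival
    return np.array([f * adult,                   # newborns produced by adults
                     pj * survival * juv + pa * survival * adult])

x = np.array([10.0, 5.0])
traj = [x]
for _ in range(500):
    x = step(x)
    traj.append(x)

totals = np.array(traj)[-100:].sum(axis=1)
print("min/max total density over the last 100 steps:",
      round(float(totals.min()), 2), round(float(totals.max()), 2))
```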

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovich, P.; Carter, T. A.; Friedman, B.

    Numerical simulation of plasma turbulence in the Large Plasma Device (LAPD) [W. Gekelman, H. Pfister, Z. Lucky et al., Rev. Sci. Instrum. 62, 2875 (1991)] is presented. The model, implemented in the BOUndary Turbulence code [M. Umansky, X. Xu, B. Dudson et al., Contrib. Plasma Phys. 180, 887 (2009)], includes three-dimensional (3D) collisional fluid equations for plasma density, electron parallel momentum, and current continuity, and also includes the effects of ion-neutral collisions. In nonlinear simulations using measured LAPD density profiles but assuming a constant temperature profile for simplicity, self-consistent evolution of instabilities and nonlinearly generated zonal flows results in a saturated turbulent state. Comparisons of these simulations with measurements in LAPD plasmas reveal good qualitative and reasonable quantitative agreement, in particular in the frequency spectrum, spatial correlation, and amplitude probability distribution function of density fluctuations. For comparison with LAPD measurements, the plasma density profile in simulations is maintained either by direct azimuthal averaging on each time step, or by adding a particle source/sink function. The inferred source/sink values are consistent with the estimated ionization source and parallel losses in LAPD. These simulations lay the groundwork for a more comprehensive effort to test fluid turbulence simulation against LAPD data.

  3. An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution

    NASA Astrophysics Data System (ADS)

    Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan

    2013-04-01

    The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies including smoothed seismicity approaches. Smoothed seismicity thus represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subductions. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude assuming that (1) the occurrence of past seismicity is a good proxy to forecast occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
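    A minimal sketch of the combination step described above, with synthetic epicentres and fault-patch locations; the kernels, weighting scheme, and Gutenberg-Richter parameters are placeholder assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal sketch of the hybrid forecast: two location densities (smoothed past
# epicentres and smoothed fault moment release) are combined with a
# magnitude-dependent weight and scaled by a Gutenberg-Richter rate.
# All inputs below are synthetic placeholders, not the SHARE data.
rng = np.random.default_rng(5)
quakes = rng.normal([10.0, 45.0], [0.5, 0.3], size=(300, 2)).T    # lon/lat of past events
faults = rng.normal([10.6, 45.2], [0.2, 0.1], size=(200, 2)).T    # fault-patch locations

kde_seis, kde_fault = gaussian_kde(quakes), gaussian_kde(faults)
a_value, b_value, m_min, m_max = 4.0, 1.0, 5.0, 8.5               # assumed GR parameters

def rate(lon, lat, m, dm=0.1):
    w = np.clip((m - m_min) / (m_max - m_min), 0.0, 1.0)          # larger m -> more weight on faults
    loc_density = (1.0 - w) * kde_seis([[lon], [lat]])[0] + w * kde_fault([[lon], [lat]])[0]
    gr = 10.0 ** (a_value - b_value * m) - 10.0 ** (a_value - b_value * (m + dm))
    return loc_density * gr                                       # events per year in this bin

print("rate density at (10.3E, 45.1N), m in [6.0, 6.1):", rate(10.3, 45.1, 6.0))
```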

  4. On the probability distribution function of the mass surface density of molecular clouds. II.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-11-01

    The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures has been analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρc/ (1 + (r/r0)2)n/ 2 with central density ρc and inner radius r0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures with emphasis on the dependence of the PDF on the external pressure pext and on the overpressure q-1 = pc/pext, where pc is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects caused by a variation of pressure ratio, a distribution of condensed cores within a turbulent gas, and (in case of cylinders) a distribution of inclination angles on the mean PDF are analyzed. The probability distribution of pressure ratios q-1 is assumed to be given by P(q-1) ∝ q-k1/ (1 + (q0/q)γ)(k1 + k2) /γ, where k1, γ, k2, and q0 are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from Psph(Σ) ∝ Σ-2 to Psph(Σ) ∝ Σ-3. The corresponding asymptote of the PDF of cylinders for the large q-1 is approximately given by Pcyl(Σ) ∝ Σ-4/3(1 - (Σ/Σ(0))2/3)-1/2. The distribution of overpressures q-1 produces a power-law asymptote at high mass surface densities given by ∝ Σ-2k2 - 1 (spheres) or ∝ Σ-2k2 (cylinders). Appendices are available in electronic form at http://www.aanda.org

  5. Using SN 1987A light echoes to determine mass loss from the progenitor

    NASA Technical Reports Server (NTRS)

    Crotts, Arlin P. S.; Kunkel, William E.

    1991-01-01

    The hypothesis that the blue progenitor of SN 1987A passed through a blue supergiant phase ending with the expulsion of the outer envelope is tested. The many light echoes seen near SN 1987A were used to search for a mass flow from the progenitor and for abrupt density changes at the limits of this smooth mass flow. The progenitor needed roughly a million yr to create these structures, assuming a constant mass loss at 15 km/s. The dust in the region is small-grained and isotropically scattering. Interaction between the progenitor blue supergiant and red supergiant winds is probably contained within a roughly spherical structure 1.5 pc in diameter.

  6. Shape fabric development in rigid clast populations under pure shear: The influence of no-slip versus slip boundary conditions

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Meere, Patrick A.

    2015-09-01

    Shape fabrics of elliptical objects in rocks are usually assumed to develop by passive behavior of inclusions with respect to the surrounding material, leading to shape-based strain analysis methods belonging to the Rf/ϕ family. A probability density function is derived for the orientational characteristics of populations of rigid ellipses deforming in a pure shear 2D deformation with both no-slip and slip boundary conditions. Using maximum likelihood, a numerical method is developed for estimating finite strain in natural populations deforming by either mechanism. Application to a natural example indicates the importance of the slip mechanism in explaining clast shape fabrics in deformed sediments.

  7. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
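    A minimal sketch of the sampling step described above (the released code additionally imposes a target power spectrum on the Gaussian field in Fourier space and computes the velocity field, which this toy version omits): exponentiate a Gaussian field to obtain a log-normal overdensity and Poisson-sample galaxy counts. The grid size, mean count per cell, and Gaussian width are assumed values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    nbar, sigma_g, ngrid = 1.0, 0.8, 64          # assumed mean galaxies/cell, Gaussian width, grid size

    g = sigma_g * rng.standard_normal((ngrid, ngrid, ngrid))   # Gaussian field (white noise here)
    delta = np.exp(g - 0.5 * sigma_g**2) - 1.0                 # log-normal overdensity with <delta> = 0
    counts = rng.poisson(nbar * (1.0 + delta))                 # Poisson-sample galaxies per cell

    print(counts.mean(), counts.var())           # variance exceeds the mean: the field is clustered
    ```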

  8. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.

  9. Free-Free Absorption on Parsec Scales in Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Roy, A. L.; Ulvestad, J. S.; Wilson, A. S.; Colbert, E. J. M.; Mundell, C. G.; Wrobel, J. M.; Norris, R. P.; Falcke, H.; Krichbaum, T.

    Seyfert galaxies come in two main types (types 1 and 2) and the difference is probably due to obscuration of the nucleus by a torus of dense molecular material. The inner edge of the torus is expected to be ionized by optical and ultraviolet emission from the active nucleus, and will radiate direct thermal emission (e.g. NGC 1068) and will cause free-free absorption of nuclear radio components viewed through the torus (e.g. Mrk 231, Mrk 348, NGC 2639). However, the nuclear radio sources in Seyfert galaxies are weak compared to radio galaxies and quasars, demanding high sensitivity to study these effects. We have been making sensitive phase-referenced VLBI observations at wavelengths between 21 and 2 cm where the free-free turnover is expected, looking for parsec-scale absorption and emission. We find that free-free absorption is common (e.g. in Mrk 348, Mrk 231, NGC 2639, NGC 1068) although compact jets are still visible, and the inferred density of the absorber agrees with the absorption columns inferred from X-ray spectra (Mrk 231, Mrk 348, NGC 2639). We find one-sided parsec-scale jets in Mrk 348 and Mrk 231, and we measure low jet speeds (typically ≤ 0.1 c). The one-sidedness is probably not due to Doppler boosting, but rather to free-free absorption. The plasma density required to produce the absorption is Ne ≥ 2 × 10^5 cm^-3, assuming a path length of 0.1 pc, typical of that expected at the inner edge of the obscuring torus.

  10. Desensitization shortens the high-quantal-content endplate current time course in frog muscle with intact cholinesterase.

    PubMed

    Giniatullin, R A; Talantova, M; Vyskocil, F

    1997-08-01

    1. The desensitization induced by bath-applied carbachol or acetylcholine (ACh) and potentiated by proadifen (SKF 525A) was studied in the frog sartorius with intact synaptic acetylcholinesterase (AChE). 2. The reduction in the density and number of postsynaptic receptors produced by desensitization lowered the amplitude of the endplate currents (EPCs) and shortened the EPC decay when the quantal content (m) of the EPC was about 170 and when multiple release of quanta at single active zones was highly probable. The shortening of high-quantal-content EPCs persisted for at least 15 min after the wash-out of agonists, at a time when the amplitude had recovered fully. 3. The decay times of the low-quantal-content EPCs recorded from preparations pretreated with 5 mM Mg2+ (m approximately 70) and single-quantum miniature endplate currents (MEPCs) were not affected by carbachol, ACh or proadifen. 4. The desensitization of ACh receptors potentiated by proadifen completely prevented the 6- to 8-fold prolongation of the EPC which was induced by neostigmine inhibition of synaptic AChE. 5. It is assumed that high-quantal-content EPCs increase the incidence of multiple quanta release at single active zones and the probability of repetitive binding of ACh molecules, which leads to EPC prolongation. The shortening which persists after complete recovery of the amplitude during wash-out of the exogenous agonist is probably due to 'trapping' of ACh molecules onto rapidly desensitized receptors and the reduced density of functional AChRs during the quantum action.

  11. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a Gutenberg-Richter magnitude distribution with modifications at higher magnitudes. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimate, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.

  12. A poisson process model for hip fracture risk.

    PubMed

    Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S

    2010-08-01

    The primary method for assessing fracture risk in osteoporosis relies on measurement of bone mass. Fracture risk is most often evaluated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate, lambda. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is defined that is itself a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and no prior fracture can also be computed, and these also demonstrate results similar to clinical data. The model elucidates the fact that the hip fracture process is inherently random, and that improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
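    The thinned-Poisson argument can be written down directly. The sketch below computes the fracture rate as the fall rate times the probability that a Weibull-distributed load exceeds hip strength; the Weibull parameters, strength value, and time window are illustrative assumptions, and the paper's age and BMD dependences are omitted.

    ```python
    import numpy as np

    def fracture_rate(fall_rate, load_shape, load_scale, strength):
        """Fracture-process rate = lambda * P(load > strength), using the Weibull
        survival function for the fall-induced load."""
        p_exceed = np.exp(-((strength / load_scale) ** load_shape))
        return fall_rate * p_exceed

    def prob_fracture(fall_rate, load_shape, load_scale, strength, years):
        """Probability of at least one fracture in `years`; the thinned process is Poisson."""
        return 1.0 - np.exp(-fracture_rate(fall_rate, load_shape, load_scale, strength) * years)

    # Illustrative numbers only: two falls per year, load scale 3 kN, hip strength 5 kN.
    print(prob_fracture(fall_rate=2.0, load_shape=2.0, load_scale=3.0, strength=5.0, years=10.0))
    ```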

  13. Effects of Mean Flow Profiles on Instability of a Low-Density Gas Jet Injected into a High-Density Gas

    NASA Technical Reports Server (NTRS)

    Vedantam, Nanda Kishore

    2003-01-01

    The objective of this study was to investigate the effects of the mean flow profiles on the instability characteristics in the near-injector region of low-density gas jets injected into high-density ambient gas media. To achieve this, a linear temporal stability analysis and a spatio-temporal stability analysis of a low-density round gas jet injected vertically upwards into a high-density ambient gas were performed by assuming three different sets of mean velocity and density profiles. The flow was assumed to be isothermal and locally parallel. Viscous and diffusive effects were ignored. The mean flow parameters were represented as the sum of the mean value and a small normal-mode fluctuation. A second order differential equation governing the pressure disturbance amplitude was derived from the basic conservation equations. The first set of mean velocity and density profiles assumed were those used by Monkewitz and Sohn for investigating absolute instability in hot jets. The second set of velocity and density profiles assumed for this study were the ones used by Lawson. The third set of mean profiles included a parabolic velocity profile and a hyperbolic tangent density profile. The effects of the inhomogeneous shear layer and the Froude number (signifying the effects of gravity) on the temporal and spatio-temporal results for each set of mean profiles were delineated. Additional information is included in the original extended abstract.

  14. A least squares approach to estimating the probability distribution of unobserved data in multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Salama, Paul

    2008-02-01

    Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.

  15. High-energy Electron Scattering and the Charge Distributions of Selected Nuclei

    DOE R&D Accomplishments Database

    Hahn, B.; Ravenhall, D. G.; Hofstadter, R.

    1955-10-01

    Experimental results are presented of electron scattering by Ca, V, Co, In, Sb, Hf, Ta, W, Au, Bi, Th, and U, at 183 Mev and (for some of the elements) at 153 Mev. For those nuclei for which asphericity and inelastic scattering are absent or unimportant, i.e., Ca, V, Co, In, Sb, Au, and Bi, a partial wave analysis of the Dirac equation has been performed in which the nuclei are represented by static, spherically symmetric charge distributions. Smoothed uniform charge distributions have been assumed; these are characterized by a constant charge density in the central region of the nucleus, with a smoothed-out surface. Essentially two parameters can be determined, related to the radius and to the surface thickness. An examination of the Au experiments shows that the functional forms of the surface are not important, and that the charge density in the central regions is probably fairly flat, although it cannot be determined very accurately.

  16. Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Ghosn, L. J.

    2007-01-01

    A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure and torsional loads. Theoretical plastic-collapse failure maps, showing plots of the foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes, considering three plastic-collapse failure modes. These maps show that the 17-4PH sandwiched foam panels can fail by yielding of the face sheets, yielding of the foam core, or wrinkling of the face sheets, depending on the foam relative density, the magnitude of t/L and the loading mode. The design envelope of a generic fan blade is superimposed on the maps to provide valuable insights into the probable failure modes in a sandwiched foam fan blade.

  17. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
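    The Bessel-function correspondence mentioned above is easy to check numerically. Under the assumption that force magnitudes are exponentially distributed and directions are isotropic, the density of a single Cartesian component is K0(|f_x|/beta)/(pi*beta); the sketch below verifies this by Monte Carlo. It is a consistency check under those assumptions, not the paper's integral transform itself.

    ```python
    import numpy as np
    from scipy.special import k0

    rng = np.random.default_rng(0)
    beta = 1.0
    F = rng.exponential(beta, size=1_000_000)          # exponential force magnitudes
    theta = rng.uniform(0.0, 2.0 * np.pi, size=F.size) # isotropic directions
    fx = F * np.cos(theta)                             # one Cartesian component

    hist, edges = np.histogram(fx, bins=200, range=(-6.0, 6.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    model = k0(np.abs(centers) / beta) / (np.pi * beta)

    mask = np.abs(centers) > 0.2                       # avoid the integrable divergence at f_x = 0
    print(np.max(np.abs(hist[mask] - model[mask])))    # small: the histogram follows the K0 form
    ```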

  18. Variational Bayesian Inversion of Quasi-Localized Seismic Attributes for the Spatial Distribution of Geological Facies

    NASA Astrophysics Data System (ADS)

    Nawaz, Muhammad Atif; Curtis, Andrew

    2018-04-01

    We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, by showing how the usual probabilistic inverse problem can be solved within an optimization framework while still providing full probabilistic results. Our mathematical model consists of seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend on the facies only at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.

  19. On the abundance of extraterrestrial life after the Kepler mission

    NASA Astrophysics Data System (ADS)

    Wandel, Amri

    2015-07-01

    The data recently accumulated by the Kepler mission have demonstrated that small planets are quite common and that a significant fraction of all stars may have an Earth-like planet within their habitable zone. These results are combined with a Drake-equation formalism to derive the space density of biotic planets as a function of the relatively modest uncertainty in the astronomical data and of the (yet unknown) probability for the evolution of biotic life, F_b. I suggest that F_b may be estimated by future spectral observations of exoplanet biomarkers. If F_b is in the range 0.001-1, then a biotic planet may be expected within 10-100 light years from Earth. Extending the biotic results to advanced life, I derive expressions for the distance to putative civilizations in terms of two additional Drake parameters - the probability for evolution of a civilization, F_c, and its average longevity. For instance, assuming optimistic probability values (F_b ~ F_c ~ 1) and a broadcasting longevity of a few thousand years, the likely distance to the nearest civilizations detectable by searching for intelligent electromagnetic signals is of the order of a few thousand light years. The probability of detecting intelligent signals with present and future radio telescopes is calculated as a function of the Drake parameters. Finally, I describe how the detection of intelligent signals would constrain the Drake parameters.
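    A back-of-the-envelope version of the distance estimate can be reproduced as follows; the local stellar density and habitable-zone fraction used here are assumed round numbers, not the values adopted in the paper.

    ```python
    import numpy as np

    def nearest_biotic_distance(F_b, n_star=0.004, f_hz=0.2):
        """Distance (light years) at which one biotic planet is expected on average:
        d = (3 / (4*pi*n))**(1/3) with n = n_star * f_hz * F_b planets per cubic light year.
        n_star ~ 0.004 stars/ly^3 and f_hz ~ 0.2 are illustrative assumptions."""
        n = n_star * f_hz * F_b
        return (3.0 / (4.0 * np.pi * n)) ** (1.0 / 3.0)

    for F_b in (0.001, 0.01, 0.1, 1.0):
        # distance grows as F_b drops; with these assumed densities it spans a few to ~70 ly
        print(F_b, round(nearest_biotic_distance(F_b), 1))
    ```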

  20. Noise reduction in heat-assisted magnetic recording of bit-patterned media by optimizing a high/low Tc bilayer structure

    NASA Astrophysics Data System (ADS)

    Muthsam, O.; Vogler, C.; Suess, D.

    2017-12-01

    It is assumed that heat-assisted magnetic recording is the recording technique of the future. For pure hard magnetic grains in high density media with an average diameter of 5 nm and a height of 10 nm, the switching probability is not sufficiently high for the use in bit-patterned media. Using a bilayer structure with 50% hard magnetic material with low Curie temperature and 50% soft magnetic material with high Curie temperature to obtain more than 99.2% switching probability leads to very large jitter. We propose an optimized material composition to reach a switching probability of Pswitch > 99.2% and simultaneously achieve the narrow transition jitter of pure hard magnetic material. Simulations with a continuous laser spot were performed with the atomistic simulation program VAMPIRE for a single cylindrical recording grain with a diameter of 5 nm and a height of 10 nm. Different configurations of soft magnetic material and different amounts of hard and soft magnetic material were tested and discussed. Within our analysis, a composition with 20% soft magnetic and 80% hard magnetic material reaches the best results with a switching probability Pswitch > 99.2%, an off-track jitter parameter σoff,80/20 = 0.46 nm and a down-track jitter parameter σdown,80/20 = 0.49 nm.

  1. The Adaptation of the Moth Pheromone Receptor Neuron to its Natural Stimulus

    NASA Astrophysics Data System (ADS)

    Kostal, Lubomir; Lansky, Petr; Rospars, Jean-Pierre

    2008-07-01

    We analyze the first phase of information transduction in the model of the olfactory receptor neuron of the male moth Antheraea polyphemus. We predict such stimulus characteristics that enable the system to perform optimally, i.e., to transfer as much information as possible. Few a priori constraints on the nature of the stimulus and the stimulus-to-signal transduction are assumed. The results are given in terms of stimulus distributions and intermittency factors, which makes direct comparison with experimental data possible. The optimal stimulus is approximately described by an exponential or log-normal probability density function, which is in agreement with experiment, and the predicted intermittency factors fall within the lowest range of observed values. The results are discussed with respect to electroantennogram measurements and behavioral observations.

  2. MAI statistics estimation and analysis in a DS-CDMA system

    NASA Astrophysics Data System (ADS)

    Alami Hassani, A.; Zouak, M.; Mrabti, M.; Abdi, F.

    2018-05-01

    A primary limitation of Direct Sequence Code Division Multiple Access (DS-CDMA) link performance and system capacity is multiple access interference (MAI). To examine the performance of CDMA systems in the presence of MAI, i.e., in a multiuser environment, several works assumed that the interference can be approximated by a Gaussian random variable. In this paper, we first develop a new and simple approach to characterize the MAI in a multiuser system. In addition to statistically quantifying the MAI power, the paper also proposes a statistical model for both the variance and the mean of the MAI for synchronous and asynchronous CDMA transmission. We show that the MAI probability density function (PDF) is Gaussian for the equal-received-energy case and validate this by computer simulations.

  3. The Interaction Between the Magnetosphere of Mars and that of Comet Siding Spring

    NASA Astrophysics Data System (ADS)

    Holmstrom, M.; Futaana, Y.; Barabash, S. V.

    2015-12-01

    On 19 October 2014 the comet Siding Spring flew by Mars. This was a unique opportunity to study the interaction between a cometary and a planetary magnetosphere. Here we model the magnetosphere of the comet using a hybrid plasma solver (ions as particles, electrons as a fluid). The undisturbed upstream solar wind ion conditions are estimated from observations by ASPERA-3/IMA on Mars Express during several orbits. It is found that Mars probably passed through a solar wind that was disturbed by the comet during the flyby. The uncertainty arises because the size of the disturbed solar wind region in the comet simulation is sensitive to the assumed upstream solar wind conditions, especially the solar wind proton density.

  4. Fixation Probability in a Haploid-Diploid Population

    PubMed Central

    Bessho, Kazuhiro; Otto, Sarah P.

    2017-01-01

    Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright–Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. PMID:27866168
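    For comparison with the classical single-ploidy results discussed above, the sketch below evaluates the textbook branching-process and diffusion (Kimura) approximations for a single new mutant in a haploid Wright-Fisher population; it is a baseline only and does not reproduce the paper's haploid-diploid expressions.

    ```python
    import numpy as np

    def fixation_prob_diffusion(s, N):
        """Kimura's diffusion approximation for a single new mutant in a haploid
        Wright-Fisher population: u = (1 - exp(-2*s)) / (1 - exp(-2*N*s))."""
        return np.expm1(-2.0 * s) / np.expm1(-2.0 * N * s)

    def fixation_prob_branching(s):
        """Branching-process approximation for a weakly beneficial mutation: u ~ 2*s."""
        return 2.0 * s

    for s in (0.001, 0.01, 0.05):
        # the two approximations agree when selection is weak but positive and N*s is large
        print(s, fixation_prob_branching(s), fixation_prob_diffusion(s, N=10_000))
    ```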

  5. Probability of detection of nests and implications for survey design

    USGS Publications Warehouse

    Smith, P.A.; Bart, J.; Lanctot, Richard B.; McCaffery, B.J.; Brown, S.

    2009-01-01

    Surveys based on double sampling include a correction for the probability of detection by assuming complete enumeration of birds in an intensively surveyed subsample of plots. To evaluate this assumption, we calculated the probability of detecting active shorebird nests by using information from observers who searched the same plots independently. Our results demonstrate that this probability varies substantially by species and stage of the nesting cycle but less by site or density of nests. Among the species we studied, the estimated single-visit probability of nest detection during the incubation period varied from 0.21 for the White-rumped Sandpiper (Calidris fuscicollis), the most difficult species to detect, to 0.64 for the Western Sandpiper (Calidris mauri), the most easily detected species, with a mean across species of 0.46. We used these detection probabilities to predict the fraction of persistent nests found over repeated nest searches. For a species with the mean value for detectability, the detection rate exceeded 0.85 after four visits. This level of nest detection was exceeded in only three visits for the Western Sandpiper, but six to nine visits were required for the White-rumped Sandpiper, depending on the type of survey employed. Our results suggest that the double-sampling method's requirement of nearly complete counts of birds in the intensively surveyed plots is likely to be met for birds with nests that survive over several visits of nest searching. Individuals with nests that fail quickly or individuals that do not breed can be detected with high probability only if territorial behavior is used to identify likely nesting pairs. © The Cooper Ornithological Society, 2009.
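    Under the simplifying assumption that visits are independent and the single-visit detection probability stays constant, the cumulative chance of finding a persistent nest is 1 - (1 - p)^k; the short script below reproduces the visit counts quoted above to within rounding (the paper's figures also reflect nest stage and survey type, which this ignores).

    ```python
    def visits_needed(p, target=0.85):
        """Smallest number of independent visits k with 1 - (1 - p)**k >= target."""
        k, found = 0, 0.0
        while found < target:
            k += 1
            found = 1.0 - (1.0 - p) ** k
        return k, round(found, 3)

    print(visits_needed(0.46))  # mean species: (4, 0.915), matching "four visits"
    print(visits_needed(0.21))  # White-rumped Sandpiper: (9, 0.88), the upper end of "six to nine"
    print(visits_needed(0.64))  # Western Sandpiper: (2, 0.87) under this constant-p simplification
    ```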

  6. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  7. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).

  8. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...

  9. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
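    The positivity problem referred to above is easy to demonstrate with the standard truncated Gram-Charlier A series; the skewness value below is arbitrary, and the series form is the textbook one, not the author's manifestly positive expansion.

    ```python
    import numpy as np

    def gram_charlier_a(x, skew, ex_kurt):
        """Gram-Charlier A series truncated after the kurtosis term:
        f(x) = phi(x) * [1 + skew/6 * He3(x) + ex_kurt/24 * He4(x)],
        with probabilists' Hermite polynomials He3 = x^3 - 3x and He4 = x^4 - 6x^2 + 3."""
        phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
        he3 = x**3 - 3.0 * x
        he4 = x**4 - 6.0 * x**2 + 3.0
        return phi * (1.0 + skew / 6.0 * he3 + ex_kurt / 24.0 * he4)

    x = np.linspace(-5.0, 5.0, 1001)
    f = gram_charlier_a(x, skew=1.2, ex_kurt=0.0)
    print(f.min() < 0.0)   # True: the truncated series is not manifestly positive
    ```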

  10. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient to modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state of the art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  11. Impact of stratospheric aircraft on calculations of nitric acid trihydrate cloud surface area densities using NMC temperatures and 2D model constituent distributions

    NASA Technical Reports Server (NTRS)

    Considine, David B.; Douglass, Anne R.

    1994-01-01

    A parameterization of NAT (nitric acid trihydrate) clouds is developed for use in 2D models of the stratosphere. The parameterization uses model distributions of HNO3 and H2O to determine critical temperatures for NAT formation as a function of latitude and pressure. National Meteorological Center temperature fields are then used to determine monthly temperature frequency distributions, also as a function of latitude and pressure. The fractions of these distributions which fall below the critical temperatures for NAT formation are then used to determine the NAT cloud surface area density for each location in the model grid. By specifying heterogeneous reaction rates as functions of the surface area density, it is then possible to assess the effects of the NAT clouds on model constituent distributions. We also consider the increase in the NAT cloud formation in the presence of a fleet of stratospheric aircraft. The stratospheric aircraft NO(x) and H2O perturbations result in increased HNO3 as well as H2O. This increases the probability of NAT formation substantially, especially if it is assumed that the aircraft perturbations are confined to a corridor region.
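    As described, the core of the parameterization reduces to the fraction of a monthly temperature distribution falling below the NAT formation threshold. The sketch below does exactly that with synthetic temperatures; the conversion to a surface area density (`sad_max`) is a placeholder assumption, since the abstract does not give the actual scaling.

    ```python
    import numpy as np

    def nat_surface_area_density(temps, t_crit, sad_max):
        """Fraction of the temperature distribution below the NAT critical temperature,
        scaled by an assumed maximum surface area density (placeholder units)."""
        frac_below = np.mean(np.asarray(temps) < t_crit)
        return frac_below, frac_below * sad_max

    # Synthetic monthly temperatures (K) at one latitude/pressure grid point; t_crit would
    # come from the model's HNO3 and H2O fields at that location.
    rng = np.random.default_rng(1)
    temps = rng.normal(loc=196.0, scale=4.0, size=1000)
    print(nat_surface_area_density(temps, t_crit=195.0, sad_max=1.0e-8))
    ```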

  12. Information theory lateral density distribution for Earth inferred from global gravity field

    NASA Technical Reports Server (NTRS)

    Rubincam, D. P.

    1981-01-01

    Information Theory Inference, better known as the Maximum Entropy Method, was used to infer the lateral density distribution inside the Earth. The approach assumed that the Earth consists of indistinguishable Maxwell-Boltzmann particles populating infinitesimal volume elements, and followed the standard methods of statistical mechanics (maximizing the entropy function). The GEM 10B spherical harmonic gravity field coefficients, complete to degree and order 36, were used as constraints on the lateral density distribution. The spherically symmetric part of the density distribution was assumed to be known. The lateral density variation was assumed to be small compared to the spherically symmetric part. The resulting information theory density distribution for the cases of no crust removed, 30 km of compensated crust removed, and 30 km of uncompensated crust removed all gave broad density anomalies extending deep into the mantle, but with the density contrasts being greatest towards the surface (typically ±0.004 g cm^-3 in the first two cases and ±0.04 g cm^-3 in the third). None of the density distributions resemble classical organized convection cells. The information theory approach may have use in choosing Standard Earth Models, but the inclusion of seismic data into the approach appears difficult.

  13. Contagious seed dispersal beneath heterospecific fruiting trees and its consequences.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwit, Charles; Levey, Douglas, J.; Greenberg, Cathyrn, H.

    2004-05-03

    Kwit, Charles, D.J. Levey and Cathryn H. Greenberg. 2004. Contagious seed dispersal beneath heterospecific fruiting trees and its consequences. Oikos 107:303-308. An hypothesized advantage of seed dispersal is avoidance of high per capita mortality (i.e. density-dependent mortality) associated with dense populations of seeds and seedlings beneath parent trees. This hypothesis, inherent in nearly all seed dispersal studies, assumes that density effects are species-specific. Yet because many tree species exhibit overlapping fruiting phenologies and share dispersers, seeds may be deposited preferentially under synchronously fruiting heterospecific trees, another location where they may be particularly vulnerable to mortality, in this case by generalist seed predators. We demonstrate that frugivores disperse higher densities of Cornus florida seeds under fruiting (female) Ilex opaca trees than under non-fruiting (male) Ilex trees in temperate hardwood forest settings in South Carolina, USA. To determine if density of Cornus and/or Ilex seeds influences survivorship of dispersed Cornus seeds, we followed the fates of experimentally dispersed Cornus seeds in neighborhoods of differing, manipulated background densities of Cornus and Ilex seeds. We found that the probability of predation on dispersed Cornus seeds was a function of both Cornus and Ilex background seed densities. Higher densities of Ilex seeds negatively affected Cornus seed survivorship, and this was particularly evident as background densities of dispersed Cornus seeds increased. These results illustrate the importance of viewing seed dispersal and predation in a community context, as the pattern and intensity of density-dependent mortality may not be solely a function of conspecific densities.

  14. Spectral dimension controlling the decay of the quantum first-detection probability

    NASA Astrophysics Data System (ADS)

    Thiel, Felix; Kessler, David A.; Barkai, Eli

    2018-06-01

    We consider a quantum system that is initially localized at x_in and that is repeatedly projectively probed with a fixed period τ at position x_d. We ask for the probability F_n that the system is detected at x_d for the very first time, where n is the number of detection attempts. We relate the asymptotic decay and oscillations of F_n with the system's energy spectrum, which is assumed to be absolutely continuous. In particular, F_n is determined by the Hamiltonian's measurement spectral density of states (MSDOS) f(E) that is closely related to the density of energy states (DOS). We find that F_n decays like a power law whose exponent is determined by the power-law exponent d_S of f(E) around its singularities E*. Our findings are analogous to the classical first passage theory of random walks. In contrast to the classical case, the decay of F_n is accompanied by oscillations with frequencies that are determined by the singularities E*. This gives rise to critical detection periods τ_c at which the oscillations disappear. In the ordinary case d_S can be identified with the spectral dimension associated with the DOS. Furthermore, the singularities E* are the van Hove singularities of the DOS in this case. We find that the asymptotic statistics of F_n depend crucially on the initial and detection state and can be wildly different for out-of-the-ordinary states, which is in sharp contrast to the classical theory. The properties of the first-detection probabilities can alternatively be derived from the transition amplitudes. All our results are confirmed by numerical simulations of the tight-binding model, and of a free particle in continuous space both with a normal and with an anomalous dispersion relation. We provide explicit asymptotic formulas for the first-detection probability in these models.

  15. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  16. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
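    A stripped-down version of the idea, assuming a Gaussian whose mean and log-width drift linearly in time (a much simpler parametric family than the paper's, and without its treatment of parameter uncertainty): fit by maximum likelihood, then extrapolate the density forward.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 400)
    x = rng.normal(1.0 - 0.8 * t, 0.3 + 0.2 * t)          # synthetic nonstationary data

    def neg_log_lik(theta):
        a0, a1, b0, b1 = theta                            # mean and log-sigma, both linear in t
        mu, sigma = a0 + a1 * t, np.exp(b0 + b1 * t)
        return np.sum(0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma))

    fit = minimize(neg_log_lik, x0=np.zeros(4))
    a0, a1, b0, b1 = fit.x
    t_future = 1.5                                        # extrapolate beyond the observed window
    print(a0 + a1 * t_future, np.exp(b0 + b1 * t_future)) # forecast mean and width at t_future
    ```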

  17. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    NASA Astrophysics Data System (ADS)

    Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo

    2010-10-01

    The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. The new model can therefore be considered to reflect Born perturbation theory exactly. Simulated results confirm the accuracy of this new model.

  18. Anomalous sea surface structures as an object of statistical topography

    NASA Astrophysics Data System (ADS)

    Klyatskin, V. I.; Koshel, K. V.

    2015-06-01

    By exploiting ideas of statistical topography, we analyze the stochastic boundary problem of the emergence of anomalously high structures on the sea surface. The kinematic boundary condition on the sea surface is assumed to be a closed stochastic quasilinear equation. Applying the stochastic Liouville equation, and presuming the stochastic nature of a given hydrodynamic velocity field within the diffusion approximation, we derive an equation for a spatially single-point, simultaneous joint probability density of the surface elevation field and its gradient. An important feature of the model is that it accounts for stochastic bottom irregularities as one, but not the only, perturbation. Hence, we address the assumption of the infinitely deep ocean to obtain statistical features of the surface elevation field and the squared elevation gradient field. According to the calculations, we show that clustering in the absolute surface elevation gradient field happens with unit probability. It results in the emergence of rare events such as anomalously high structures and deep gaps on the sea surface in almost every realization of a stochastic velocity field.

  19. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  20. Earthquake Potential Models for China

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Jackson, D. D.

    2002-12-01

    We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative ``a-value," the slope or ``b-value," and a ``corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.

  1. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  2. Fixation Probability in a Haploid-Diploid Population.

    PubMed

    Bessho, Kazuhiro; Otto, Sarah P

    2017-01-01

    Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright-Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. Copyright © 2017 by the Genetics Society of America.

  3. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  4. A compound scattering pdf for the ultrasonic echo envelope and its relationship to K and Nakagami distributions.

    PubMed

    Shankar, P Mohana

    2003-03-01

    A compound probability density function (pdf) is presented to describe the envelope of the backscattered echo from tissue. This pdf allows local and global variation in scattering cross sections in tissue. The ultrasonic backscattering cross sections are assumed to be gamma distributed. The gamma distribution also is used to model the randomness in the average cross sections. This gamma-gamma model results in the compound scattering pdf for the envelope. The relationship of this compound pdf to the Rayleigh, K, and Nakagami distributions is explored through an analysis of the signal-to-noise ratio of the envelopes and random number simulations. The three parameter compound pdf appears to be flexible enough to represent envelope statistics giving rise to Rayleigh, K, and Nakagami distributions.

  5. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

    This paper aims to propose a quadratic assignment-based mathematical model to deal with the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent normally distributed random variables with known probability density function and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design of experiments and benchmark methods. The results show that the hybrid algorithm performs well in terms of both solution quality and computational time. In addition, the proposed model can be used in both stochastic and deterministic situations.

  6. Factors controlling the structures of magma chambers in basaltic volcanoes

    NASA Technical Reports Server (NTRS)

    Wilson, L.; Head, James W.

    1991-01-01

    The depths, vertical extents, and lateral extents of magma chambers and their formation are discussed. The depth to the center of a magma chamber is most probably determined by the density structure of the lithosphere; this process is explained. It is commonly assumed that magma chambers grow until the stresses on the roof, floor, and side-wall boundaries exceed the strength of the wall rocks. Attempts to grow further lead to dike propagation events which reduce the stresses below the critical values for rock failure. The tensile or compressive failure of the walls is discussed with respect to magma migration. The later growth of magma chambers is accomplished by lateral dike injection into the country rocks. The factors controlling the patterns of growth and cooling of such dikes are briefly mentioned.

  7. A description of discrete internal representation schemes for visual pattern discrimination.

    PubMed

    Foster, D H

    1980-01-01

    A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.

  8. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  9. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Goldstein, D.; Magnotti, F.; Chinitz, W.

    1983-01-01

    Reaction rates in turbulent, reacting flows are reviewed. Assumed probability density function (pdf) modeling of reaction rates is investigated in relation to a three-variable pdf employing a 'most likely pdf' model. Chemical kinetic mechanisms treating hydrogen-air combustion are studied. Perfectly stirred reactor modeling of flame-stabilizing recirculation regions was used to investigate the stable flame regions for silane, hydrogen, methane, and propane, and for certain mixtures thereof. It is concluded that, in general, silane can be counted upon to stabilize flames only when the overall fuel-air ratio is close to or greater than unity. For lean flames, silane may tend to destabilize the flame. Other factors favoring stable flames are high initial reactant temperatures and system pressure.

  10. Theoretical size distribution of fossil taxa: analysis of a null model.

    PubMed

    Reed, William J; Hughes, Barry D

    2007-03-22

    This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.

  11. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
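
    The L1-median named above is the point minimizing the sum of Euclidean distances to the samples; a minimal Weiszfeld-iteration sketch (a generic implementation, not the authors' estimator) shows how it resists gross outliers that drag the sample mean away.

      import numpy as np

      def l1_median(X, tol=1e-8, max_iter=500):
          """Spatial (L1) median of the rows of X via Weiszfeld's iteration."""
          y = X.mean(axis=0)                      # start from the ordinary mean
          for _ in range(max_iter):
              d = np.linalg.norm(X - y, axis=1)
              d = np.where(d < 1e-12, 1e-12, d)   # guard against zero distances
              w = 1.0 / d
              y_new = (w[:, None] * X).sum(axis=0) / w.sum()
              if np.linalg.norm(y_new - y) < tol:
                  return y_new
              y = y_new
          return y

      rng = np.random.default_rng(1)
      data = rng.normal(size=(200, 2))
      data[:10] += 50.0                            # gross outliers
      print("mean     :", data.mean(axis=0))       # dragged toward the outliers
      print("L1-median:", l1_median(data))         # stays near the origin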

  12. Using Expected Value to Introduce the Laplace Transform

    ERIC Educational Resources Information Center

    Lutzer, Carl V.

    2015-01-01

    We propose an introduction to the Laplace transform in which Riemann sums are used to approximate the expected net change in a function, assuming that it quantifies a process that can terminate at random. We assume only a basic understanding of probability.
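
    One concrete way to set the construction up (a plausible reading of the abstract, with the termination time T and rate s introduced here for illustration): if the process quantified by f terminates at a random time T that is exponentially distributed with rate s, the expected net change approximated by the Riemann sums is already a Laplace transform,

      E[f(T) - f(0)] = \int_0^\infty \Big( \int_0^t f'(u)\,du \Big)\, s e^{-st}\,dt
                     = \int_0^\infty f'(u)\, e^{-su}\,du
                     = s\,\mathcal{L}\{f\}(s) - f(0), \qquad T \sim \mathrm{Exp}(s),

    where the middle step swaps the order of integration so that \Pr(T > u) = e^{-su} appears. Only basic probability (the exponential survival function) is needed to reach the transform.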

  13. Excel, Earthquakes, and Moneyball: exploring Cascadia earthquake probabilities using spreadsheets and baseball analogies

    NASA Astrophysics Data System (ADS)

    Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2017-12-01

    Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue", "in the next 50 years, there is a 1-in-10 chance a "really big one" will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago, than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write and display spreadsheets, like those shown below. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record, or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of getting a hit is N%" or "the probability of an earthquake is N%" involves specifying the assumptions made. Different plausible assumptions yield a wide range of estimates. In both seismology and sports, how to better predict future performance remains an important question.
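
    A short numerical sketch of the two modeling choices the abstract contrasts: a time-independent (Poisson) recurrence model versus a time-dependent (lognormal renewal) model conditioned on the time already elapsed since the last event. The recurrence parameters below are placeholders for classroom use, not the Cascadia values.

      import math

      def prob_poisson(mean_recurrence, horizon):
          """Time-independent model: events arrive as a Poisson process."""
          return 1.0 - math.exp(-horizon / mean_recurrence)

      def prob_lognormal_conditional(mean_recurrence, cov, elapsed, horizon):
          """Time-dependent model: lognormal renewal, conditioned on `elapsed`
          years having already passed since the last event."""
          sigma = math.sqrt(math.log(1.0 + cov ** 2))
          mu = math.log(mean_recurrence) - 0.5 * sigma ** 2
          def cdf(t):
              return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))
          return 1.0 - (1.0 - cdf(elapsed + horizon)) / (1.0 - cdf(elapsed))

      # placeholder numbers: 500-yr mean recurrence, 320 yr elapsed, 50-yr window
      print(prob_poisson(500.0, 50.0))                           # ~0.10
      print(prob_lognormal_conditional(500.0, 0.5, 320.0, 50.0))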

  14. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2;11-6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  15. Geographical, Ethnic and Socio-Economic Differences in Utilization of Obstetric Care in the Netherlands.

    PubMed

    Posthumus, Anke G; Borsboom, Gerard J; Poeran, Jashvant; Steegers, Eric A P; Bonsel, Gouke J

    2016-01-01

    All women in the Netherlands should have equal access to obstetric care. However, utilization of care is shaped by demand and supply factors. Demand is increased in high-risk groups (non-Western women, low socio-economic status (SES)), and supply is influenced by availability of hospital facilities (hospital density). To explore the dynamics of obstetric care utilization we investigated the joint association of hospital density and individual characteristics with prototype obstetric interventions. A logistic multi-level model was fitted on retrospective data from the Netherlands Perinatal Registry (years 2000-2008, 1,532,441 singleton pregnancies). In this analysis, the first level comprised individual maternal characteristics, and the second comprised neighbourhood SES and hospital density. The four outcome variables were: referral during pregnancy, elective caesarean section (term and post-term breech pregnancies), induction of labour (term and post-term pregnancies), and birth setting in assumed low-risk pregnancies. Higher hospital density is not associated with more obstetric interventions. Adjusted for maternal characteristics and hospital density, living in low SES neighbourhoods and non-Western ethnicity were generally associated with a lower probability of interventions. For example, non-Western women had considerably lower odds for induction of labour in all geographical areas, with strongest effects in the more rural areas (non-Western women: OR 0.78, 95% CI 0.77-0.80, p<0.001). Our results suggest inequalities in obstetric care utilization in the Netherlands, and more specifically a relative underservice to the deprived, independent of level of supply.

  16. The risk of pedestrian collisions with peripheral visual field loss.

    PubMed

    Peli, Eli; Apfelbaum, Henry; Berson, Eliot L; Goldstein, Robert B

    2016-12-01

    Patients with peripheral field loss complain of colliding with other pedestrians in open-space environments such as shopping malls. Field expansion devices (e.g., prisms) can create artificial peripheral islands of vision. We investigated the visual angle at which these islands can be most effective for avoiding pedestrian collisions, by modeling the collision risk density as a function of bearing angle of pedestrians relative to the patient. Pedestrians at all possible locations were assumed to be moving in all directions with equal probability within a reasonable range of walking speeds. The risk density was found to be highly anisotropic. It peaked at ≈45° eccentricity. Increasing pedestrian speed range shifted the risk to higher eccentricities. The risk density is independent of time to collision. The model results were compared to the binocular residual peripheral island locations of 42 patients with forms of retinitis pigmentosa. The natural residual island prevalence also peaked nasally at about 45° but temporally at about 75°. This asymmetry resulted in a complementary coverage of the binocular field of view. Natural residual binocular island eccentricities seem well matched to the collision-risk density function, optimizing detection of other walking pedestrians (nasally) and of faster hazards (temporally). Field expansion prism devices will be most effective if they can create artificial peripheral islands at about 45° eccentricities. The collision risk and residual island findings raise interesting questions about normal visual development.

  17. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    PubMed

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP effects (SNP-BLUP model). When reducing marker density from WGS data to 30 K, SNP-BLUP tended to yield the highest accuracies, at least in the short term. Based on SVD of the genotype matrix, we developed a direct method for the calculation of BayesC estimates of marker effects. Although SVD- and MCMC-based marker effects differed slightly, their prediction accuracies were similar. Assuming that the SVD of the marker genotype matrix is already performed for other reasons (e.g. for SNP-BLUP), computation times for the BayesC predictions were comparable to those of SNP-BLUP.

  18. A survey of carbon monoxide emission in dark clouds. [cosmic dust

    NASA Technical Reports Server (NTRS)

    Dickman, R. L.

    1975-01-01

    Results are reported of a CO and (C-13)O survey of 68 dark clouds from the Lynds catalog. CO was detected in 63 of the 64 sources in which it was searched for, and the (C-13)O line was seen in 52 of 55 clouds. There is a rather narrow distribution of CO peak line radiation temperatures about a mean of 6 K; this may reflect the presence of a roughly uniform kinetic temperature of 9.5 K in the sources. Despite the probably subthermal excitation temperature of the (C-13)O transition observed, derived (C-13)O column densities are most likely good to within a factor of 2. Typical CO column densities for the clouds surveyed are 5 × 10^17 cm^-2, assuming a terrestrial carbon isotope ratio. All 68 clouds have previously been studied by Dieter in 6-cm H2CO absorption; a comparison of line widths shows the (C-13)O lines to generally be wider than their formaldehyde counterparts. Possible explanations of this fact in terms of internal cloud motions are discussed.

  19. Possible formation pathways for the low-density Neptune-mass planet HAT-P-26b

    NASA Astrophysics Data System (ADS)

    Ali-Dib, Mohamad; Lakhlani, Gunjan

    2018-01-01

    We investigate possible pathways for the formation of the low-density Neptune-mass planet HAT-P-26b. We use two different formation models based on pebble and planetesimal accretion, both of which include gas accretion, disc migration, and simple photoevaporation. The models track the atmospheric oxygen abundance, in addition to the orbital period and mass of the forming planets, which we compare to HAT-P-26b. We find that pebble accretion can explain this planet more naturally than planetesimal accretion, which fails completely unless we artificially enhance the disc metallicity significantly. Pebble accretion models can reproduce HAT-P-26b with either a high initial core mass and a low degree of envelope enrichment through core erosion or pebble dissolution, or the opposite; both scenarios are possible. Assuming a low envelope enrichment factor as expected from convection theory and comparable to the values we can infer from the D/H measurements in Uranus and Neptune, our most probable formation pathway for HAT-P-26b is through pebble accretion starting around 10 au early in the disc's lifetime.

  20. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but one where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent, in contrast to many of the probability density functions on which the received theory of finance is based.

  1. Probabilistic Cloning of Three Real States with Optimal Success Probabilities

    NASA Astrophysics Data System (ADS)

    Rui, Pin-shu

    2017-06-01

    We investigate the probabilistic quantum cloning (PQC) of three real states with average probability distribution. To get the analytic forms of the optimal success probabilities, we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of 1 → 2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out too. The optimal success probabilities are also generalized to the M → N PQC case.

  2. Bayesian assessment of uncertainty in aerosol size distributions and index of refraction retrieved from multiwavelength lidar measurements.

    PubMed

    Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir

    2008-04-01

    We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect and strengthens similar observations based on numerical regularization methods.

  3. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.

    1993-01-01

    The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high-speed reacting flows. Our efforts in the first two years of this research have been concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods, and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.

  4. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  5. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  6. Coagulation of grains in static and collapsing protostellar clouds

    NASA Technical Reports Server (NTRS)

    Weidenschilling, S. J.; Ruzmaikina, T. V.

    1994-01-01

    We simulate collisional evolution of grains in dense turbulent molecular cloud cores (or Bok globules) in static equilibrium and free-fall collapse, assuming spherical symmetry. Relative velocities are due to thermal motions, differential settling, and turbulence, with the latter dominant for sonic turbulence with an assumed Kolmogorov spectrum. Realistic criteria are used to determine outcomes of collisions (coagulation vs. destruction) as functions of particle size and velocity. Results are presented for a variety of cloud parameters (radial density profile, turbulent velocity) and particle properties (density, impact strength). Results are sensitive to the assumed mechanical properties (density and impact strength) of grain aggregates. Particle growth is enhanced if aggregates have low density or fractal structures. On a timescale of a few Myr, an initial population of 0.1 micrometer grains may produce dense compact particles approximately 1 micrometer in size, or fluffy aggregates approximately 100 micrometers in size. For impact strengths less than or equal to 10^6 ergs/g, a steady state is reached between coagulation of small grains and collisional disruption of larger aggregates. Formation of macroscopic aggregates requires high mechanical strengths and low aggregate densities. We assume sonic turbulence during collapse, with varied eddy size scales determining the dissipation rate or turbulence strength. The degree of collisional evolution during collapse is sensitive to the assumed small-scale structure (inner scale) of the turbulence. Weak turbulence results in few collisions and preserves the precollapse particle size distribution with little change. Strong turbulence tends to produce net destruction, rather than particle growth, during infall, unless impact strengths are greater than 10^6 ergs/g.

  7. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    PubMed

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.

  8. A statistical treatment of bioassay pour fractions

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack; Hughes, David

    A bioassay is a method for estimating the number of bacterial spores on a spacecraft surface for the purpose of demonstrating compliance with planetary protection (PP) requirements (Ref. 1). The details of the process may be seen in the appropriate PP document (e.g., for NASA, Ref. 2). In general, the surface is mechanically sampled with a damp sterile swab or wipe. The completion of the process is colony formation in a growth medium in a plate (Petri dish); the colonies are counted. Consider a set of samples from randomly selected, known areas of one spacecraft surface, for simplicity. One may calculate the mean and standard deviation of the bioburden density, which is the ratio of counts to area sampled. The standard deviation represents an estimate of the variation from place to place of the true bioburden density commingled with the precision of the individual sample counts. The accuracy of individual sample results depends on the equipment used, the collection method, and the culturing method. One aspect that greatly influences the result is the pour fraction, which is the quantity of fluid added to the plates divided by the total fluid used in extracting spores from the sampling equipment. In an analysis of a single sample’s counts due to the pour fraction, one seeks to answer the question: if a certain number of spores is counted with a known pour fraction, what is the probability that an additional number of spores remains in the part of the rinse not poured? This is given for specific values by the binomial distribution density, where detection (of culturable spores) is success and the probability of success is the pour fraction. A special summation over the binomial distribution, equivalent to adding for all possible values of the true total number of spores, is performed. This distribution, when normalized, almost yields the desired quantity. It is the probability that the additional number of spores does not exceed a certain value. Of course, for a desired value of uncertainty, one must invert the calculation. However, this probability of finding exactly the number of spores in the poured part is correct only in the case where all values of the true number of spores greater than or equal to the adjusted count are equally probable. This is not realistic, of course, but the result can only overestimate the uncertainty, so it is useful. In probabilistic terms, one has the conditional probability given any true total number of spores. Therefore one must multiply it by the probability of each possible true count, before the summation. If the counts for a sample set (of which this is one sample) are available, one may use the calculated variance and the normal probability distribution. In this approach, one assumes a normal distribution and neglects the contribution from spatial variation. The former is a common assumption. The latter can only add to the conservatism (overestimate the number of spores at some level of confidence). A more straightforward approach is to assume a Poisson probability distribution for the measured total sample set counts, and use the product of the number of samples and the mean number of counts per sample as the mean of the Poisson distribution. It is necessary to set the total count to 1 in the Poisson distribution when the actual total count is zero.
    Finally, even when the planetary protection requirements for spore burden refer only to the mean values, they require an adjustment for pour fraction and method efficiency (a PP specification based on independent data). The adjusted mean values are a 50/50 proposition (e.g., the probability of the true total counts in the sample set exceeding the estimate is 0.50). However, this is highly unconservative when the total counts are zero. No adjustment to the mean values occurs for either pour fraction or efficiency. The recommended approach is once again to set the total counts to 1, but now applied to the mean values. Then one may apply the corrections to the revised counts. It can be shown by the methods developed in this work that this change is usually conservative enough to increase the level of confidence in the estimate to 0.5.
    1. NASA (2005). Planetary protection provisions for robotic extraterrestrial missions. NPR 8020.12C, April 2005, National Aeronautics and Space Administration, Washington, DC.
    2. NASA (2010). Handbook for the Microbiological Examination of Space Hardware. NASA-HDBK-6022, National Aeronautics and Space Administration, Washington, DC.
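
    A minimal numerical sketch of the binomial pour-fraction argument described above: given n colonies counted and a known pour fraction, and adopting the flat prior over the true total that the abstract notes can only overestimate the uncertainty, the posterior number of spores left in the unpoured rinse is negative binomial and can be summed directly. Names and example values are illustrative.

      from math import comb

      def prob_additional_at_most(n_counted, pour_fraction, k_extra):
          """P(at most k_extra spores were in the unpoured rinse | n_counted observed).

          Conditional on a true total N, the count is Binomial(N, pour_fraction);
          a flat prior over N >= n_counted makes the posterior on the number of
          extra spores j = N - n_counted negative binomial."""
          p, n = pour_fraction, n_counted
          return sum(comb(n + j, j) * (1.0 - p) ** j
                     for j in range(k_extra + 1)) * p ** (n + 1)

      # e.g. 4 colonies counted with an 80% pour fraction
      for k in range(6):
          print(k, round(prob_additional_at_most(4, 0.8, k), 3))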

  9. Theoretical size distribution of fossil taxa: analysis of a null model

    PubMed Central

    Reed, William J; Hughes, Barry D

    2007-01-01

    Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249

  10. Deductibles in health insurance

    NASA Astrophysics Data System (ADS)

    Dimitriyadis, I.; Öney, Ü. N.

    2009-11-01

    This study is an extension of a simulation study that has been developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation, and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, in which the insured is assumed to be an exponential-utility decision maker while the insurer's pricing rule is a PH-transform, is also treated.
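
    For reference, Wang's proportional-hazards (PH) transform prices a non-negative loss X by distorting its survival function; in its standard form (the loading and deductible parameters used in this study are not restated here),

      H_\rho[X] = \int_0^\infty \big[ S_X(x) \big]^{1/\rho} \, dx, \qquad S_X(x) = \Pr(X > x), \quad \rho \ge 1,

    so that \rho = 1 recovers the net premium E[X], while larger \rho places proportionally more weight on the right tail of the loss distribution than a flat proportional loading does.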

  11. A Goal Programming/Constrained Regression Review of the Bell System Breakup.

    DTIC Science & Technology

    1985-05-01

    characteristically employ. 2. MULTI-PRODUCT COST MODEL AND DATA DETAILS: When technical efficiency (i.e., zero waste) can be assumed... assuming, but we believe that it was probably technical (= zero waste) efficiency by virtue of the following reasons. Scale efficiency was a

  12. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  13. Variations in Ionospheric Peak Electron Density During Sudden Stratospheric Warmings in the Arctic Region

    NASA Astrophysics Data System (ADS)

    Yasyukevich, A. S.

    2018-04-01

    The focus of this paper is ionospheric disturbances during sudden stratospheric warming (SSW) events in the Arctic region. This study examines the ionospheric behavior during 12 SSW events, which occurred in the Northern Hemisphere over 2006-2013, based on vertical sounding data from the DPS-4 ionosonde located in Norilsk (88.0°E, 69.2°N). Most of the addressed events show that despite generally quiet geomagnetic conditions, notable changes in the ionospheric behavior are observed during SSWs. During the SSW evolution and peak phases, there is a daytime decrease in NmF2 values of 10-20% relative to the background level. After the SSW maxima, in contrast, midday NmF2 surpasses the average monthly values for 10-20 days. These changes in the electron density are observed for both strong and weak stratospheric warmings occurring at midwinter. The revealed SSW effects in the polar ionosphere are assumed to be associated with changes in the thermospheric neutral composition, affecting the F2-layer electron density. Analysis of the Global Ultraviolet Imager data revealed positive variations in the O/N2 ratio within the thermosphere during SSW peak and recovery periods. Probable mechanisms for SSW impact on the state of the high-latitude neutral thermosphere and ionosphere are discussed.

  14. Constraints on Average Radial Anisotropy in the Lower Mantle

    NASA Astrophysics Data System (ADS)

    Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.

    2014-12-01

    Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies for spheroidal and toroidal modes for three parameters of average radial anisotropy, density and P- and S-wave velocities in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. Other parameters (P- and shear-wave velocities) are close to PREM. Our results require that the average mantle is about 150 K colder than commonly assumed adiabats and consists of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.

  15. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    NASA Technical Reports Server (NTRS)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite-element definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction, and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.
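
    A minimal sketch of the forward step described here, with the body interior discretized into cubic finite elements that are collapsed to point masses; a full implementation would instead use the constant-density polyhedron formulas, and the grid, density, and field-point values below are illustrative.

      import numpy as np

      G = 6.674e-11  # m^3 kg^-1 s^-2

      def gravity_from_elements(centers, densities, cell_volume, field_point):
          """Gravitational acceleration at field_point from point-mass elements;
          accuracy improves as the element size shrinks relative to the range."""
          r = centers - field_point                     # vectors field point -> elements
          d = np.linalg.norm(r, axis=1, keepdims=True)
          masses = (densities * cell_volume)[:, None]
          return (G * masses * r / d ** 3).sum(axis=0)

      # toy usage: a 10 x 10 x 10 block of 100 m cells with density 2000 kg/m^3
      n, h = 10, 100.0
      axes = [np.arange(n) * h + h / 2] * 3
      centers = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
      rho = np.full(len(centers), 2000.0)
      print(gravity_from_elements(centers, rho, h ** 3, np.array([5000.0, 500.0, 500.0])))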

  16. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  17. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  18. From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience.

    PubMed

    Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded

    2017-07-01

    Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Climate change and the detection of trends in annual runoff

    USGS Publications Warehouse

    McCabe, G.J.; Wolock, D.M.

    1997-01-01

    This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.
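
    A compact sketch of the kind of calculation described, using a two-sided t-test on the regression slope and an effective-sample-size inflation of the slope variance to account for lag-1 serial correlation. This is a generic textbook approximation with placeholder numbers, not the authors' exact procedure.

      import numpy as np
      from scipy import stats

      def trend_detection_prob(trend_total, n_years, sd, lag1, alpha=0.05):
          """Approximate probability of detecting a linear trend of size
          `trend_total` (total change over the record) in a series with
          standard deviation `sd` and lag-1 autocorrelation `lag1`."""
          t = np.arange(n_years)
          slope = trend_total / (n_years - 1)
          var_inflation = (1.0 + lag1) / (1.0 - lag1)        # AR(1) adjustment
          se_slope = sd * np.sqrt(var_inflation / np.sum((t - t.mean()) ** 2))
          ncp = slope / se_slope                             # noncentrality
          tcrit = stats.t.ppf(1.0 - alpha / 2.0, df=n_years - 2)
          return (1.0 - stats.nct.cdf(tcrit, df=n_years - 2, nc=ncp)
                  + stats.nct.cdf(-tcrit, df=n_years - 2, nc=ncp))

      # e.g. a change of 20% of a 300 mm mean, sd 120 mm, r1 = 0.2, 100 years
      print(trend_detection_prob(0.20 * 300.0, 100, 120.0, 0.2))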

  20. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  1. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
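
    The multivariate transformation theorem invoked here is the standard change-of-variables rule; writing it out (with notation introduced for illustration) makes the issue concrete:

      p_Y(y) = p_X\big( g^{-1}(y) \big)\, \big| \det J_{g^{-1}}(y) \big|, \qquad y = g(x),

    so a density that is uniform in x remains uniform in y only when the Jacobian determinant is constant, which is why a uniform admissible region generally ceases to be uniform after a change of state-space variables unless the prior is constructed to be invariant under the reparameterization.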

  2. Comparing the ISO-recommended and the cumulative data-reduction algorithms in S-on-1 laser damage test by a reverse approach method

    NASA Astrophysics Data System (ADS)

    Zorila, Alexandru; Stratan, Aurel; Nemes, George

    2018-01-01

    We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
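
    A minimal sketch of the reverse-approach idea as described: choose a known threshold fluence and an assumed per-shot damage-probability curve, generate artificial damaged/non-damaged site outcomes over a ladder of test fluences, and hand the synthetic records to any of the data-reduction algorithms for comparison against the known threshold. The curve shape, parameter names, and values below are illustrative.

      import numpy as np

      rng = np.random.default_rng(42)

      def damage_probability(fluence, threshold, width):
          """Assumed per-shot damage probability: a smooth ramp above the
          known threshold (purely illustrative)."""
          return np.clip((fluence - threshold) / width, 0.0, 1.0)

      def generate_s_on_1(fluences, sites_per_level, shots, threshold, width):
          """Artificial S-on-1 records: a site is damaged if any of its
          `shots` pulses damages it."""
          records = []
          for f in fluences:
              p_shot = damage_probability(f, threshold, width)
              p_site = 1.0 - (1.0 - p_shot) ** shots
              damaged = rng.random(sites_per_level) < p_site
              records.extend((f, bool(d)) for d in damaged)
          return records

      data = generate_s_on_1(np.linspace(1.0, 4.0, 13), sites_per_level=10,
                             shots=100, threshold=2.0, width=1.5)
      # reduce `data` with the standard or cumulative algorithm and compare
      # the fitted threshold with the assumed value of 2.0
      print(sum(d for _, d in data), "damaged sites of", len(data))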

  3. Joint Analysis of GOCE Gravity Gradients Data with Seismological and Geodynamic Observations to Infer Mantle Properties

    NASA Astrophysics Data System (ADS)

    Metivier, L.; Greff-Lefftz, M.; Panet, I.; Pajot-Métivier, G.; Caron, L.

    2014-12-01

    Joint inversion of the observed geoid and seismic velocities has been commonly used to constrain the viscosity profile within the mantle as well as the lateral density variations. Recent satellite measurements of the second-order derivatives of the Earth's gravity potential offer new possibilities for understanding these mantle properties. We use lateral density variations in the Earth's mantle based on slab history or deduced from seismic tomography. The main uncertainties are the relationship between seismic velocity and density (the so-called density/velocity scaling factor) and the variation with depth of the density contrast between the cold slabs and the surrounding mantle, introduced here as a scaling factor with respect to a constant value. The geoid, gravity and gravity gradients at the altitude of the GOCE satellite (about 255 km) are derived using geoid kernels for given viscosity depth profiles. We assume a layered mantle model with viscosity and conversion factor constant in each layer, and we fix the viscosity of the lithosphere. We perform a Monte Carlo search for the viscosity and the density/velocity scaling factor profiles within the mantle which allow us to fit the observed geoid, gravity and gradients of gravity. We test a 2-layer, a 3-layer, and a 4-layer mantle. For each model, we compute the posterior probability distribution of the unknown parameters, and we discuss the respective contributions of the geoid, gravity and gravity gradients in the inversion. Finally, for the best fit, we present the viscosity and scaling factor profiles obtained for the lateral density variations derived from seismic velocities and for slabs sinking into the mantle.

  4. Exposure time of oral rabies vaccine baits relative to baiting density and raccoon population density.

    PubMed

    Blackwell, Bradley F; Seamans, Thomas W; White, Randolph J; Patton, Zachary J; Bush, Rachel M; Cepek, Jonathan D

    2004-04-01

    Oral rabies vaccination (ORV) baiting programs for control of raccoon (Procyon lotor) rabies in the USA have been conducted or are in progress in eight states east of the Mississippi River. However, data specific to the relationship between raccoon population density and the minimum density of baits necessary to significantly elevate rabies immunity are few. We used the 22-km2 US National Aeronautics and Space Administration Plum Brook Station (PBS) in Erie County, Ohio, USA, to evaluate the period of exposure for placebo vaccine baits placed at a density of 75 baits/km2 relative to raccoon population density. Our objectives were to 1) estimate raccoon population density within the fragmented forest, old-field, and industrial landscape at PBS; and 2) quantify the time that placebo, Merial RABORAL V-RG vaccine baits were available to raccoons. From August through November 2002, we surveyed raccoon use of PBS along 19.3 km of paved-road transects by using a forward-looking infrared camera mounted inside a vehicle. We used Distance 3.5 software to calculate a probability of detection function by which we estimated raccoon population density from transect data. Estimated population density on PBS decreased from August (33.4 raccoons/km2) through November (13.6 raccoons/km2), yielding a monthly mean of 24.5 raccoons/km2. We also quantified exposure time for ORV baits placed by hand on five 1-km2 grids on PBS from September through October. An average of 82.7% (SD = 4.6) of baits were removed within 1 wk of placement. Given raccoon population density, estimates of bait removal and sachet condition, and assuming 22.9% nontarget take, the baiting density of 75/km2 yielded an average of 3.3 baits consumed per raccoon and the sachet perforated.

  5. Applying geographic profiling used in the field of criminology for predicting the nest locations of bumble bees.

    PubMed

    Suzuki-Ohno, Yukari; Inoue, Maki N; Ohno, Kazunori

    2010-07-21

    We tested whether geographic profiling (GP) can predict multiple nest locations of bumble bees. GP was originally developed in the field of criminology for predicting the area where an offender most likely resides on the basis of the actual crime sites and the predefined probability of crime interaction. The predefined probability of crime interaction in the GP model depends on the distance of a site from an offender's residence. We applied GP for predicting nest locations, assuming that foraging and nest sites were the crime sites and the offenders' residences, respectively. We identified the foraging and nest sites of the invasive species Bombus terrestris in 2004, 2005, and 2006. We fitted GP model coefficients to the field data of the foraging and nest sites, and used GP with the fitted coefficients. GP succeeded in predicting about 10-30% of actual nests. Sensitivity analysis showed that the predictability of the GP model mainly depended on the coefficient value of the buffer zone, the distance at the mode of the foraging probability. GP will be able to predict the nest locations of bumble bees in other areas by using the fitted coefficient values measured in this study. It will be possible to further improve the predictability of the GP model by considering food site preference and nest density. (c) 2010 Elsevier Ltd. All rights reserved.
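
    A minimal sketch of the profiling idea follows: each observed foraging site contributes a distance-decay kernel that is low right at the site, peaks at a buffer distance, and decays beyond it, and the summed surface is read as a relative probability of nest location. The kernel, the buffer distance and the coordinates are simplified assumptions, not the GP formulation or coefficients fitted in the paper.

```python
# Simplified geographic-profiling surface with a buffer zone (illustrative
# kernel, not the exact GP model fitted to the bumble bee data).
import numpy as np

def gp_surface(foraging_xy, grid_x, grid_y, buffer_dist=50.0, decay=0.02):
    X, Y = np.meshgrid(grid_x, grid_y)
    score = np.zeros_like(X)
    for fx, fy in foraging_xy:
        d = np.hypot(X - fx, Y - fy)
        score += np.exp(-decay * np.abs(d - buffer_dist))  # peak at the buffer distance
    return score / score.sum()                             # normalize to a probability surface

foraging_xy = [(120.0, 80.0), (140.0, 60.0), (90.0, 110.0)]   # hypothetical foraging sites (m)
gx = np.linspace(0.0, 200.0, 201)
gy = np.linspace(0.0, 200.0, 201)
surface = gp_surface(foraging_xy, gx, gy)
iy, ix = np.unravel_index(np.argmax(surface), surface.shape)
print("most probable nest location (x, y) =", gx[ix], gy[iy])
```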

  6. Decision theory and information propagation in quantum physics

    NASA Astrophysics Data System (ADS)

    Forrester, Alan

    In recent papers, Zurek [(2005). Probabilities from entanglement, Born's rule p_k = |ψ_k|^2 from entanglement. Physical Review A, 71, 052105] has objected to the decision-theoretic approach of Deutsch [(1999) Quantum theory of probability and decisions. Proceedings of the Royal Society of London A, 455, 3129-3137] and Wallace [(2003). Everettian rationality: defending Deutsch's approach to probability in the Everett interpretation. Studies in History and Philosophy of Modern Physics, 34, 415-438] to deriving the Born rule for quantum probabilities on the grounds that it courts circularity. Deutsch and Wallace assume that the many worlds theory is true and that decoherence gives rise to a preferred basis. However, decoherence arguments use the reduced density matrix, which relies upon the partial trace and hence upon the Born rule for its validity. Using the Heisenberg picture and quantum Darwinism - the notion that classical information is quantum information that can proliferate in the environment, pioneered in Ollivier et al. [(2004). Objective properties from subjective quantum states: Environment as a witness. Physical Review Letters, 93, 220401 and (2005). Environment as a witness: Selective proliferation of information and emergence of objectivity in a quantum universe. Physical Review A, 72, 042113] - I show that measurement interactions between two systems only create correlations between a specific set of commuting observables of system 1 and a specific set of commuting observables of system 2. This argument picks out a unique basis in which information flows in the correlations between those sets of commuting observables. I then derive the Born rule for both pure and mixed states and answer some other criticisms of the decision theoretic approach to quantum probability.

  7. Invasion resistance arises in strongly interacting species-rich model competition communities.

    PubMed Central

    Case, T J

    1990-01-01

    I assemble stable multispecies Lotka-Volterra competition communities that differ in resident species number and average strength (and variance) of species interactions. These are then invaded with randomly constructed invaders drawn from the same distribution as the residents. The invasion success rate and the fate of the residents are determined as a function of community- and species-level properties. I show that the probability of colonization success for an invader decreases with community size and the average strength of competition (alpha). Communities composed of many strongly interacting species limit the invasion possibilities of most similar species. These communities, even for a superior invading competitor, set up a sort of "activation barrier" that repels invaders when they invade at low numbers. This "priority effect" for residents is not assumed a priori in my description for the individual population dynamics of these species; rather it emerges because species-rich and strongly interacting species sets have alternative stable states that tend to disfavor species at low densities. These models point to community-level rather than invader-level properties as the strongest determinant of differences in invasion success. The probability of extinction for a resident species increases with community size, and the probability of successful colonization by the invader decreases. Thus an equilibrium community size results wherein the probability of a resident species' extinction just balances the probability of an invader's addition. Given the distribution of alpha, it is now possible to predict the equilibrium species number. The results provide a logical framework for an island-biogeographic theory in which species turnover is low even in the face of persistent invasions and for the protection of fragile native species from invading exotics. PMID:11607132
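
    The "activation barrier" argument rests on the standard Lotka-Volterra invasibility condition: an invader at vanishingly low density increases only if its per-capita growth rate at the resident equilibrium is positive. A small sketch, with interaction strengths and carrying capacities drawn from assumed uniform distributions rather than the paper's exact ensembles, is:

```python
# Lotka-Volterra invasion check: residents at equilibrium, invader tested at
# low density. Parameter distributions are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
S, alpha_mean = 5, 0.2
A = rng.uniform(0.0, 2 * alpha_mean, size=(S, S))
np.fill_diagonal(A, 1.0)                 # intraspecific competition = 1
K = rng.uniform(0.5, 1.5, size=S)        # carrying capacities

N_star = np.linalg.solve(A, K)           # candidate resident equilibrium
if np.all(N_star > 0):
    a_inv = rng.uniform(0.0, 2 * alpha_mean, size=S)   # invader's competition coefficients
    K_inv = rng.uniform(0.5, 1.5)
    growth = 1.0 - a_inv @ N_star / K_inv              # sign of invader growth at low density
    print("resident equilibrium:", np.round(N_star, 2))
    print("invader can increase from rarity:", bool(growth > 0))
else:
    print("this draw has no feasible resident equilibrium (strong competition)")
```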

  8. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors at the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient, which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.

  9. Conditional Independence in Applied Probability.

    ERIC Educational Resources Information Center

    Pfeiffer, Paul E.

    This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…

  10. Third-Degree Price Discrimination Revisited

    ERIC Educational Resources Information Center

    Kwon, Youngsun

    2006-01-01

    The author derives the probability that price discrimination improves social welfare, using a simple model of third-degree price discrimination assuming two independent linear demands. The probability that price discrimination raises social welfare increases as the preferences or incomes of consumer groups become more heterogeneous. He derives the…

  11. The Misapplication of Probability Theory in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Racicot, Ronald

    2014-03-01

    This article is a revision of two papers submitted to the APS in the past two and a half years. In these papers, arguments and proofs are summarized for the following: (1) The wrong conclusion by EPR that Quantum Mechanics is incomplete, perhaps requiring the addition of ``hidden variables'' for completion. Theorems that assume such ``hidden variables,'' such as Bell's theorem, are also wrong. (2) Quantum entanglement is not a realizable physical phenomenon and is based entirely on assuming a probability superposition model for quantum spin. Such a model directly violates conservation of angular momentum. (3) Simultaneous multiple-paths followed by a quantum particle traveling through space also cannot possibly exist. Besides violating Noether's theorem, the multiple-paths theory is based solely on probability calculations. Probability calculations by themselves cannot possibly represent simultaneous physically real events. None of the reviews of the submitted papers actually refuted the arguments and evidence that were presented. These analyses should therefore be carefully evaluated since the conclusions reached have such an important impact on quantum mechanics and quantum information theory.

  12. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
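
    Specht's estimator is, in essence, a Parzen-window (kernel) density estimate: the density at a point is the average of Gaussian kernels centred on the observed samples. A minimal sketch with synthetic data (the bandwidth and sample size are arbitrary choices):

```python
# Parzen-window (Specht-style) probability density estimate with Gaussian kernels.
import numpy as np

def parzen_pdf(x, samples, h):
    """Kernel density estimate at points x with bandwidth h."""
    z = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (samples.size * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(3)
samples = rng.normal(0.0, 1.0, size=1000)   # synthetic N(0, 1) data for the check
x = np.linspace(-3.0, 3.0, 7)
print("estimate:", np.round(parzen_pdf(x, samples, h=0.3), 3))
print("true pdf:", np.round(np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi), 3))
```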

  13. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    …in particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum… …an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for…

  14. A critical analysis of high-redshift, massive, galaxy clusters. Part I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, Ben; Jimenez, Raul; Verde, Licia

    2012-02-01

    We critically investigate current statistical tests applied to high redshift clusters of galaxies in order to test the standard cosmological model and describe their range of validity. We carefully compare a sample of high-redshift, massive, galaxy clusters with realistic Poisson sample simulations of the theoretical mass function, which include the effect of Eddington bias. We compare the observations and simulations using the following statistical tests: the distributions of ensemble and individual existence probabilities (in the > M, > z sense), the redshift distributions, and the 2d Kolmogorov-Smirnov test. Using seemingly rare clusters from Hoyle et al. (2011) and Jee et al. (2011), and assuming the same survey geometry as in Jee et al. (2011, which is less conservative than Hoyle et al. 2011), we find that the ( > M, > z) existence probabilities of all clusters are fully consistent with ΛCDM. However, assuming the same survey geometry, we use the 2d K-S test probability to show that the observed clusters are not consistent with being the least probable clusters from simulations at > 95% confidence, and are also not consistent with being a random selection of clusters, which may be caused by the non-trivial selection function and survey geometry. Tension can be removed if we examine only an X-ray selected subsample, with simulations performed assuming a modified survey geometry.

  15. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.

  16. Suppression of accretion on to low-mass Population III stars

    NASA Astrophysics Data System (ADS)

    Johnson, Jarrett L.; Khochfar, Sadegh

    2011-05-01

    Motivated by recent theoretical work suggesting that a substantial fraction of Population (Pop) III stars may have had masses low enough for them to survive to the present day, we consider the role that the accretion of metal-enriched gas may have had in altering their surface composition, thereby disguising them as Pop II stars. We demonstrate that if weak, solar-like winds are launched from low-mass Pop III stars formed in the progenitors of the dark matter halo of the Galaxy, then such stars are likely to avoid significant enrichment via accretion of material from the interstellar medium. We find that at early times accretion is easily prevented if the stars are ejected from the central regions of the haloes in which they form, either by dynamical interactions with more massive Pop III stars or by violent relaxation during halo mergers. While accretion may still take place during passage through sufficiently dense molecular clouds at later times, we find that the probability of such a passage is generally low (≲0.1), assuming that stars have velocities of the order of the maximum circular velocity of their host haloes and accounting for the orbital decay of merging haloes. In turn, due to the higher gas density required for accretion on to stars with higher velocities, we find an even lower probability of accretion (~10^-2) for the subset of Pop III stars formed at z > 10, which are more quickly incorporated into massive haloes than stars formed at lower redshift. While there is no a priori reason to assume that low-mass Pop III stars do not have solar-like winds, without them surface enrichment via accretion is likely to be inevitable. We briefly discuss the implications that our results hold for stellar archaeology.

  17. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
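
    The reweighting step can be illustrated with a toy version of the scheme: draw one sample set from a mixture of the plausible input models, run the (expensive) response model once on those samples, and then reweight the same samples to each candidate model. The two candidate distributions, their model probabilities and the response function below are illustrative assumptions, not the plate-buckling application.

```python
# One shared sample set propagated once, then reweighted to each candidate
# input-probability model (toy stand-ins for the multimodel set).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
candidates = [stats.norm(loc=10.0, scale=2.0), stats.lognorm(s=0.2, scale=10.0)]
model_prob = np.array([0.6, 0.4])          # assumed model probabilities

n = 50_000
comp = rng.choice(len(candidates), size=n, p=model_prob)
draws = np.stack([m.rvs(size=n, random_state=rng) for m in candidates])
x = draws[comp, np.arange(n)]                                  # mixture importance samples
q = sum(p * m.pdf(x) for p, m in zip(model_prob, candidates))  # importance density

response = x ** 2            # placeholder for the expensive response model
threshold = 150.0

for m, p in zip(candidates, model_prob):
    w = m.pdf(x) / q                                       # reweight to this candidate
    prob = np.sum(w * (response > threshold)) / np.sum(w)
    print(f"candidate (model prob {p:.1f}): P(response > {threshold}) = {prob:.3f}")
```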

  18. Multiple Scattering in Random Mechanical Systems and Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun

    2013-10-01

    This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint (densely) defined on the space of square-integrable functions over the (lower) half-space with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.

  19. A statistical study of gyro-averaging effects in a reduced model of drift-wave transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.

    2016-08-25

    Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K_0, becomes K_0 J_0(p̂), where J_0 is the zeroth-order Bessel function and p̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for p̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K_0 J_0(p̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), P_c, and the probability of particle trapping, P_t. It is shown that P_c provides an upper bound for the escape rate and that P_t provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte Carlo simulations of particle transport.
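
    The gyro-averaged amplitude is easy to explore numerically. The sketch below samples normalized Larmor radii (here taken, for illustration, from a Rayleigh distribution as a stand-in for the Maxwellian pdf of the paper, whose normalization may differ), forms K_0 J_0(p̂), and estimates the fraction of particles whose effective amplitude exceeds the standard-map global-chaos threshold K ≈ 0.9716, used here only as a rough proxy for loss of confinement.

```python
# Monte Carlo sketch of the effective GSM perturbation amplitude K0 * J0(p_hat).
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(5)
K0 = 2.0                                       # assumed bare perturbation amplitude
p_hat = rng.rayleigh(scale=1.0, size=200_000)  # normalized Larmor radii (assumed Rayleigh)
K_eff = K0 * j0(p_hat)                         # effective, gyro-averaged amplitude

print("mean effective amplitude:", round(K_eff.mean(), 3))
print("P(|K_eff| > 0.9716):", round(np.mean(np.abs(K_eff) > 0.9716), 3))
```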

  20. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    PubMed

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was confirmed for the dry-weather situation. Discharges and P(tot) concentrations in both the study creek and the WWTP effluent follow log-normal probability distributions. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the WWTP emission limits calculated would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
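
    The mixing-equation Monte Carlo itself is compact; a sketch with log-normal inputs (parameter values below are illustrative placeholders, not the study-catchment statistics) reads:

```python
# Monte Carlo mixing equation for downstream P_tot; inputs are log-normal,
# parameterized by arithmetic mean and coefficient of variation (illustrative).
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

def lognormal(mean, cv, size):
    sigma2 = np.log(1.0 + cv ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

Q_stream = lognormal(0.20, 0.6, n)    # creek discharge (m3/s)
C_stream = lognormal(0.13, 0.42, n)   # upstream P_tot (mg/l)
Q_wwtp   = lognormal(0.05, 0.3, n)    # effluent discharge (m3/s)
C_wwtp   = lognormal(1.00, 0.5, n)    # effluent P_tot (mg/l)

C_mix = (Q_stream * C_stream + Q_wwtp * C_wwtp) / (Q_stream + Q_wwtp)
c90 = np.percentile(C_mix, 90)
print(f"downstream C90 = {c90:.3f} mg/l; meets 0.2 mg/l EQS: {c90 <= 0.2}")
```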

  1. Dynamics of influence and social balance in spatially-embedded regular and random networks

    NASA Astrophysics Data System (ADS)

    Singh, P.; Sreenivasan, S.; Szymanski, B.; Korniss, G.

    2015-03-01

    Structural balance - the tendency of social relationship triads to prefer specific states of polarity - can be a fundamental driver of beliefs, behavior, and attitudes on social networks. Here we study how structural balance affects deradicalization in an otherwise polarized population of leftists and rightists constituting the nodes of a low-dimensional social network. Specifically, assuming an externally moderating influence that converts leftists or rightists to centrists with probability p, we study the critical value p = p_c, below which the presence of metastable mixed population states exponentially delays the achievement of centrist consensus. Above the critical value, centrist consensus is the only fixed point. Complementing our previously shown results for complete graphs, we present results for the process on low-dimensional networks, and show that the low-dimensional embedding of the underlying network significantly affects the critical value of probability p. Intriguingly, on low-dimensional networks, the critical value p_c can show non-monotonicity as the dimensionality of the network is varied. We conclude by analyzing the scaling behavior of temporal variation of unbalanced triad density in the network for different low-dimensional network topologies. Supported in part by ARL NS-CTA, ONR, and ARO.

  2. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of a nonstationary random process possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.

  3. Bayesian probabilities for Mw 9.0+ earthquakes in the Aleutian Islands from a regionally scaled global rate

    NASA Astrophysics Data System (ADS)

    Butler, Rhett; Frazer, L. Neil; Templeton, William J.

    2016-05-01

    We use the global rate of Mw ≥ 9.0 earthquakes, and standard Bayesian procedures, to estimate the probability of such mega events in the Aleutian Islands, where they pose a significant risk to Hawaii. We find that the probability of such an earthquake along the Aleutian island arc is 6.5% to 12% over the next 50 years (50% credibility interval) and that the annualized risk to Hawai'i is about $30 M. Our method (the regionally scaled global rate method or RSGR) is to scale the global rate of Mw 9.0+ events in proportion to the fraction of global subduction (units of area per year) that takes place in the Aleutians. The RSGR method assumes that Mw 9.0+ events are a Poisson process with a rate that is both globally and regionally stationary on the time scale of centuries, and it follows the principle of Burbidge et al. (2008), who used the product of fault length and convergence rate, i.e., the area being subducted per annum, to scale the Poisson rate for the GSS to sections of the Indonesian subduction zone. Before applying RSGR to the Aleutians, we first apply it to five other regions of the global subduction system where its rate predictions can be compared with those from paleotsunami, paleoseismic, and geoarcheology data. To obtain regional rates from paleodata, we give a closed-form solution for the probability density function of the Poisson rate when event count and observation time are both uncertain.
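
    A stripped-down version of the RSGR calculation, with a conjugate Gamma posterior standing in for the paper's closed-form treatment of uncertain count and observation time, and with the global event count, catalogue length and Aleutian subduction fraction entered as illustrative placeholders:

```python
# Regionally scaled global rate (RSGR) sketch: Gamma posterior on the global
# Mw>=9 Poisson rate, scaled by the region's assumed share of global subduction.
import numpy as np

rng = np.random.default_rng(7)
n_events, T_years = 5, 112.0      # assumed global Mw>=9 count and catalogue length
frac_region = 0.06                # assumed Aleutian fraction of global subduction per year

lam_global = rng.gamma(shape=0.5 + n_events, scale=1.0 / T_years, size=200_000)
lam_region = frac_region * lam_global

p50 = 1.0 - np.exp(-lam_region * 50.0)            # P(at least one event in 50 yr)
lo, med, hi = np.percentile(p50, [25, 50, 75])    # median and 50% credibility interval
print(f"P(Mw>=9 in 50 yr): median {med:.3f}, 50% CI [{lo:.3f}, {hi:.3f}]")
```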

  4. Continental-scale, seasonal movements of a heterothermic migratory tree bat

    USGS Publications Warehouse

    Cryan, Paul M.; Stricker, Craig A.; Wunder, Michael B.

    2014-01-01

    Long-distance migration evolved independently in bats and unique migration behaviors are likely, but because of their cryptic lifestyles, many details remain unknown. North American hoary bats (Lasiurus cinereus cinereus) roost in trees year-round and probably migrate farther than any other bats, yet we still lack basic information about their migration patterns and wintering locations or strategies. This information is needed to better understand unprecedented fatality of hoary bats at wind turbines during autumn migration and to determine whether the species could be susceptible to an emerging disease affecting hibernating bats. Our aim was to infer probable seasonal movements of individual hoary bats to better understand their migration and seasonal distribution in North America. We analyzed the stable isotope values of non-exchangeable hydrogen in the keratin of bat hair and combined isotopic results with prior distributional information to derive relative probability density surfaces for the geographic origins of individuals. We then mapped probable directions and distances of seasonal movement. Results indicate that hoary bats summer across broad areas. In addition to assumed latitudinal migration, we uncovered evidence of longitudinal movement by hoary bats from inland summering grounds to coastal regions during autumn and winter. Coastal regions with nonfreezing temperatures may be important wintering areas for hoary bats. Hoary bats migrating through any particular area, such as a wind turbine facility in autumn, are likely to have originated from a broad expanse of summering grounds from which they have traveled in no recognizable order. Better characterizing migration patterns and wintering behaviors of hoary bats sheds light on the evolution of migration and provides context for conserving these migrants.

  5. A consensus-based dynamics for market volumes

    NASA Astrophysics Data System (ADS)

    Sabatelli, Lorenzo; Richmond, Peter

    2004-12-01

    We develop a model of trading orders based on opinion dynamics. The agents may be thought of as the shareholders of a major mutual fund rather than as direct traders. The balance between their buy and sell orders determines the size of the fund order (volume) and has an impact on prices and indexes. We assume agents interact simultaneously with each other through a Sznajd-like interaction. Their degree of connection is determined by the probability of changing opinion independently of what their neighbours are doing. We assume that such a probability may change randomly, after each transaction, by an amount proportional to the relative difference between the volatility then measured and a benchmark that we assume to be an exponential moving average of the past volume values. We show how this simple model is compatible with some of the main statistical features observed for the asset volumes in financial markets.

  6. Detection of the earth with the SETI microwave observing system assumed to be operating out in the Galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, John; Tarter, Jill

    1989-01-01

    The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.

  7. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.

  8. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  9. Assessment of Template-Based Modeling of Protein Structure in CASP11

    PubMed Central

    Modi, Vivek; Xu, Qifang; Adhikari, Sam; Dunbrack, Roland L.

    2016-01-01

    We present the assessment of predictions submitted in the template-based modeling (TBM) category of CASP11 (Critical Assessment of Protein Structure Prediction). Model quality was judged on the basis of global and local measures of accuracy on all atoms including side chains. The top groups on 39 human-server targets based on model 1 predictions were LEER, Zhang, LEE, MULTICOM, and Zhang-Server. The top groups on 81 targets by server groups based on model 1 predictions were Zhang-Server, nns, BAKER-ROSETTASERVER, QUARK, and myprotein-me. In CASP11, the best models for most targets were equal to or better than the best template available in the Protein Data Bank, even for targets with poor templates. The overall performance in CASP11 is similar to the performance of predictors in CASP10 with slightly better performance on the hardest targets. For most targets, assessment measures exhibited bimodal probability density distributions. Multi-dimensional scaling of an RMSD matrix for each target typically revealed a single cluster with models similar to the target structure, with a mode in the GDT-TS density between 40 and 90, and a wide distribution of models highly divergent from each other and from the experimental structure, with density mode at a GDT-TS value of ~20. The models in this peak in the density were either compact models with entirely the wrong fold, or highly non-compact models. The results argue for a density-driven approach in future CASP TBM assessments that accounts for the bimodal nature of these distributions instead of Z-scores, which assume a unimodal, Gaussian distribution. PMID:27081927

  10. The frequency-domain approach for apparent density mapping

    NASA Astrophysics Data System (ADS)

    Tong, T.; Guo, L.

    2017-12-01

    Apparent density mapping is a technique to estimate the density distribution in a subsurface layer from observed gravity data. It has been widely applied for geologic mapping, tectonic study and mineral exploration for decades. Apparent density mapping usually models the density layer as a collection of vertical, juxtaposed prisms in both horizontal directions, whose top and bottom surfaces are assumed to be horizontal or variable-depth, and then inverts or deconvolves the gravity anomalies to determine the density of each prism. Conventionally, the frequency-domain approach, which assumes that both top and bottom surfaces of the layer are horizontal, is utilized for fast density mapping. However, such an assumption is not always valid in the real world, since either the top surface or the bottom surface may be variable-depth. Here, we presented a frequency-domain approach for apparent density mapping, which permits both the top and bottom surfaces of the layer to be variable-depth. We first derived the formula for forward calculation of gravity anomalies caused by the density layer, whose top and bottom surfaces are variable-depth, and the formula for inversion of gravity anomalies for the density distribution. Then we proposed the procedure for density mapping based on both the inversion and forward-calculation formulas. We tested the approach on synthetic data, which verified its effectiveness. We also tested the approach on real Bouguer gravity anomaly data from central South China. The top surface was assumed to be flat and at sea level, and the bottom surface was considered as the Moho surface. The result presented the crustal density distribution, which coincided well with the basic tectonic features in the study area.
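
    For the conventional flat-surface case mentioned above, the forward and inverse steps reduce to a single wavenumber-domain transfer function: a layer of laterally varying density sigma(x, y) between flat depths z_t and z_b produces F[dg](k) = 2*pi*G*(exp(-k*z_t) - exp(-k*z_b))/k * F[sigma](k), and apparent density mapping divides the anomaly spectrum by that function. The sketch below implements only this flat-surface case; the variable-depth extension of the paper adds further Parker-type terms that are not reproduced, and the grid spacing and depths are illustrative.

```python
# Flat-surface frequency-domain forward and inverse step for apparent density
# mapping (conventional case only; depths, spacing and densities illustrative).
import numpy as np

G = 6.674e-11                       # m^3 kg^-1 s^-2
n, dx = 128, 1000.0                 # grid size and spacing (m)
z_t, z_b = 2000.0, 10000.0          # flat top and bottom depths of the layer (m)

kx = 2.0 * np.pi * np.fft.fftfreq(n, dx)
ky = 2.0 * np.pi * np.fft.fftfreq(n, dx)
k = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))

k_safe = np.where(k > 0, k, 1.0)
H = 2.0 * np.pi * G * (np.exp(-k * z_t) - np.exp(-k * z_b)) / k_safe
H[k == 0] = 2.0 * np.pi * G * (z_b - z_t)      # k -> 0 limit: Bouguer slab

rng = np.random.default_rng(10)
sigma_true = rng.normal(0.0, 50.0, (n, n))             # density contrast (kg/m^3)
dg = np.fft.ifft2(H * np.fft.fft2(sigma_true)).real    # forward-calculated anomaly (m/s^2)
sigma_rec = np.fft.ifft2(np.fft.fft2(dg) / H).real     # noise-free apparent density mapping

print("max |recovered - true| density:", np.max(np.abs(sigma_rec - sigma_true)), "kg/m^3")
```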

  11. Incorporating Covariates into Stochastic Blockmodels

    ERIC Educational Resources Information Center

    Sweet, Tracy M.

    2015-01-01

    Social networks in education commonly involve some form of grouping, such as friendship cliques or teacher departments, and blockmodels are a type of statistical social network model that accommodate these groupings or blocks by assuming different within-group tie probabilities than between-group tie probabilities. We describe a class of models,…
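
    A minimal sketch of the core assumption (block-dependent tie probabilities), with block sizes and probabilities chosen arbitrarily:

```python
# Generate an undirected network from a simple stochastic blockmodel:
# ties are more probable within blocks than between them (illustrative values).
import numpy as np

rng = np.random.default_rng(8)
block = np.repeat([0, 1, 2], 20)             # three blocks of 20 nodes
p_within, p_between = 0.30, 0.02

same = block[:, None] == block[None, :]
P = np.where(same, p_within, p_between)      # tie probability matrix
upper = np.triu(rng.random(P.shape) < P, k=1)
A = (upper | upper.T).astype(int)            # symmetric adjacency, no self-ties

off_diag = ~np.eye(block.size, dtype=bool)
print("realized within-block density: ", A[same & off_diag].mean().round(3))
print("realized between-block density:", A[~same].mean().round(3))
```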

  12. 40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...

  13. 40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...

  14. Independent Events in Elementary Probability Theory

    ERIC Educational Resources Information Center

    Csenki, Attila

    2011-01-01

    In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): If the n events E_1,…

  15. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  16. Particle Filter with State Permutations for Solving Image Jigsaw Puzzles

    PubMed Central

    Yang, Xingwei; Adluru, Nagesh; Latecki, Longin Jan

    2016-01-01

    We deal with an image jigsaw puzzle problem, which is defined as reconstructing an image from a set of square and non-overlapping image patches. It is known that a general instance of this problem is NP-complete, and it is also challenging for humans, since in the considered setting the original image is not given. Recently a graphical model has been proposed to solve this and related problems. The target label probability function is then maximized using loopy belief propagation. We also formulate the problem as maximizing a label probability function and use exactly the same pairwise potentials. Our main contribution is a novel inference approach in the sampling framework of Particle Filter (PF). Usually in the PF framework it is assumed that the observations arrive sequentially, e.g., the observations are naturally ordered by their time stamps in the tracking scenario. Based on this assumption, the posterior density over the corresponding hidden states is estimated. In the jigsaw puzzle problem all observations (puzzle pieces) are given at once without any particular order. Therefore, we relax the assumption of having ordered observations and extend the PF framework to estimate the posterior density by exploring different orders of observations and selecting the most informative permutations of observations. This significantly broadens the scope of applications of the PF inference. Our experimental results demonstrate that the proposed inference framework significantly outperforms the loopy belief propagation in solving the image jigsaw puzzle problem. In particular, the extended PF inference triples the accuracy of the label assignment compared to that using loopy belief propagation. PMID:27795660

  17. Electrical conductivity of the Earth's mantle after one year of SWARM magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Civet, François; Thebault, Erwan; Verhoeven, Olivier; Langlais, Benoit; Saturnino, Diana

    2015-04-01

    We present a global EM induction study using L1b Swarm satellite magnetic field measurements down to a depth of 2000 km. Starting from raw measurements, we first derive a model for the main magnetic field, correct the data for a lithospheric field model, and further select the data to reduce the contributions of the ionospheric field. These computations allowed us to keep full control of the data processing. We correct the residual field for outliers and estimate the spherical harmonic coefficients of the transient field for periods between 2 and 256 days. We used the full latitude range and all local times to keep a maximum amount of data. We perform a Bayesian inversion and construct a Markov chain during which model parameters are randomly updated at each iteration. We first consider regular layers of equal thickness, and extra layers are added where the conductivity contrast between successive layers exceeds a threshold value. The mean and maximum likelihood of the electrical conductivity profile is then estimated from the probability density function. The obtained profile particularly shows a conductivity jump in the 600-700 km depth range, consistent with the olivine phase transition at 660 km depth. Our study is the first one to show such a conductivity increase in this depth range without any a priori information on the internal structures. Assuming a pyrolitic mantle composition, this profile is interpreted in terms of temperature variations in the depth range where the probability density function is the narrowest. We finally obtained a temperature gradient in the lower mantle close to adiabatic.

  18. Vertical overlap of probability density functions of cloud and precipitation hydrometeors: CLOUD AND PRECIPITATION PDF OVERLAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikov, Mikhail; Lim, Kyo-Sun Sunny; Larson, Vincent E.

    Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed PDF turbulence and cloud scheme called the Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with overlap of PDFs of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species - a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slow falling cloud liquid and ice. These results suggest that to improve the overlap treatment in the sub-column generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale.

  19. A new approach to the problem of bulk-mediated surface diffusion.

    PubMed

    Berezhkovskii, Alexander M; Dagdug, Leonardo; Bezrukov, Sergey M

    2015-08-28

    This paper is devoted to bulk-mediated surface diffusion of a particle which can diffuse both on a flat surface and in the bulk layer above the surface. It is assumed that the particle is on the surface initially (at t = 0) and at time t, while in between it may escape from the surface and come back any number of times. We propose a new approach to the problem, which reduces its solution to that of a two-state problem of the particle transitions between the surface and the bulk layer, focusing on the cumulative residence times spent by the particle in the two states. These times are random variables, the sum of which is equal to the total observation time t. The advantage of the proposed approach is that it allows for a simple exact analytical solution for the double Laplace transform of the conditional probability density of the cumulative residence time spent on the surface by the particle observed for time t. This solution is used to find the Laplace transform of the particle mean square displacement and to analyze the peculiarities of its time behavior over the entire range of time. We also establish a relation between the double Laplace transform of the conditional probability density and the Fourier-Laplace transform of the particle propagator over the surface. The proposed approach treats the cases of both finite and infinite bulk layer thicknesses (where bulk-mediated surface diffusion is normal and anomalous at asymptotically long times, respectively) on equal footing.

  20. Low-Resolution Screening of Early Stage Acquisition Simulation Scenario Development Decisions

    DTIC Science & Technology

    2012-12-01

    …6 seconds) incorporating reload times and assumptions. Phit for min range is assumed to be 100% (excepting FGM-148, which was estimated for a… …User Interface; HTN, Hierarchical Task Network; MCCDC, Marine Corps Combat Development Command; Phit, probability to hit the intended target; Pkill… …well beyond the scope of this study. 5. Weapon Capabilities Translation: COMBATXXI develops situation probabilities to hit (Phit) and probabilities to…

  1. Two is better than one: joint statistics of density and velocity in concentric spheres as a cosmological probe

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.

    2017-08-01

    The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding a 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (~10 Mpc h^-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and conditional velocity PDFs subject to a given over/underdensity that are of interest for understanding the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which have smaller relative errors than their sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ_8, f) or it can be combined with the galaxy density PDF to extract bias parameters.

  2. The density structure and star formation rate of non-isothermal polytropic turbulence

    NASA Astrophysics Data System (ADS)

    Federrath, Christoph; Banerjee, Supratik

    2015-04-01

    The interstellar medium of galaxies is governed by supersonic turbulence, which likely controls the star formation rate (SFR) and the initial mass function (IMF). Interstellar turbulence is non-universal, with a wide range of Mach numbers, magnetic field strengths and driving mechanisms. Although some of these parameters were explored, most previous works assumed that the gas is isothermal. However, we know that cold molecular clouds form out of the warm atomic medium, with the gas passing through chemical and thermodynamic phases that are not isothermal. Here we determine the role of temperature variations by modelling non-isothermal turbulence with a polytropic equation of state (EOS), where pressure and temperature are functions of gas density, P ∝ ρ^Γ, T ∝ ρ^(Γ-1). We use grid resolutions of 2048^3 cells and compare polytropic exponents Γ = 0.7 (soft EOS), Γ = 1 (isothermal EOS) and Γ = 5/3 (stiff EOS). We find a complex network of non-isothermal filaments with more small-scale fragmentation occurring for Γ < 1, while Γ > 1 smoothes out density contrasts. The density probability distribution function (PDF) is significantly affected by temperature variations, with a power-law tail developing at low densities for Γ > 1. In contrast, the PDF becomes closer to a lognormal distribution for Γ ≲ 1. We derive and test a new density variance-Mach number relation that takes Γ into account. This new relation is relevant for theoretical models of the SFR and IMF, because it determines the dense gas mass fraction of a cloud, from which stars form. We derive the SFR as a function of Γ and find that it decreases by a factor of ~5 from Γ = 0.7 to 5/3.

  3. Probability of survival during accidental immersion in cold water.

    PubMed

    Wissler, Eugene H

    2003-01-01

    Estimating the probability of survival during accidental immersion in cold water presents formidable challenges for both theoreticians and empiricists. A number of theoretical models have been developed assuming that death occurs when the central body temperature, computed using a mathematical model, falls to a certain level. This paper describes a different theoretical approach to estimating the probability of survival. The human thermal model developed by Wissler is used to compute the central temperature during immersion in cold water. Simultaneously, a survival probability function is computed by solving a differential equation that defines how the probability of survival decreases with increasing time. The survival equation assumes that the probability of occurrence of a fatal event increases as the victim's central temperature decreases. Generally accepted views of the medical consequences of hypothermia and published reports of various accidents provide information useful for defining a "fatality function" that increases exponentially with decreasing central temperature. The particular function suggested in this paper yields a relationship between immersion time for 10% probability of survival and water temperature that agrees very well with Molnar's empirical observations based on World War II data. The method presented in this paper circumvents a serious difficulty with most previous models--that one's ability to survive immersion in cold water is determined almost exclusively by the ability to maintain a high level of shivering metabolism.
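
    The coupling described above can be sketched as a hazard-rate calculation: a cooling curve supplies the central temperature, and the survival probability S(t) solves dS/dt = -h(Tc) S with a hazard h that grows exponentially as Tc falls. The exponential cooling curve and the hazard constants below are illustrative assumptions, not Wissler's thermal model or his calibrated fatality function.

```python
# Survival probability from an assumed cooling curve and exponential hazard.
import numpy as np

def central_temp(t_min, water_C):
    """Toy exponential cooling toward the water temperature (stand-in model)."""
    return water_C + (37.0 - water_C) * np.exp(-t_min / 300.0)

def hazard_per_min(Tc, a=5e-4, b=0.45):
    """Assumed fatality rate rising exponentially as Tc drops below 37 C."""
    return a * np.exp(b * (37.0 - Tc))

water_C, dt = 5.0, 1.0
t = np.arange(0.0, 360.0 + dt, dt)
S = np.ones_like(t)
for i in range(1, t.size):
    S[i] = S[i - 1] * np.exp(-hazard_per_min(central_temp(t[i], water_C)) * dt)

if np.any(S < 0.10):
    print(f"time to 10% probability of survival in {water_C} C water: ~{t[np.argmax(S < 0.10)]:.0f} min")
```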

  4. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
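
    The integral at the heart of the closed-form solution can also be checked numerically: in the encounter plane, the probability of collision is the mass of a two-dimensional Gaussian (the combined position uncertainty) inside a disc whose radius is the sum of the two object radii, centred at the nominal miss distance. The covariance, miss distance and radii below are illustrative values, not the Cassini numbers, and the sketch uses Monte Carlo rather than the report's closed form.

```python
# Monte Carlo check of the encounter-plane collision probability integral.
import numpy as np

rng = np.random.default_rng(9)
cov = np.array([[400.0, 50.0],
                [50.0, 900.0]])    # combined position covariance in the encounter plane (m^2)
miss = np.array([60.0, 20.0])      # nominal separation at closest approach (m)
R_combined = 15.0                  # sum of the two object radii (m)

samples = rng.multivariate_normal(mean=miss, cov=cov, size=1_000_000)
p_collision = np.mean(np.hypot(samples[:, 0], samples[:, 1]) < R_combined)
print(f"P(collision) ~ {p_collision:.2e}")
```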

  5. The probability density function (PDF) of Lagrangian Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, B.

    2012-12-01

    The statistical theory of Lagrangian turbulence is derived from the stochastic Navier-Stokes equation. Assuming that the noise in fully-developed turbulence is a generic noise determined by the general theorems in probability, the central limit theorem and the large deviation principle, we are able to formulate and solve the Kolmogorov-Hopf equation for the invariant measure of the stochastic Navier-Stokes equations. The intermittency corrections to the scaling exponents of the structure functions require a multiplicative (multiplying the fluid velocity) noise in the stochastic Navier-Stokes equation. We let this multiplicative noise, in the equation, consist of a simple (Poisson) jump process and then show how the Feynman-Kac formula produces the log-Poissonian processes, found by She and Leveque, Waymire and Dubrulle. These log-Poissonian processes give the intermittency corrections that agree with modern direct Navier-Stokes simulations (DNS) and experiments. The probability density function (PDF) plays a key role when direct Navier-Stokes simulations or experimental results are compared to theory. The statistical theory of turbulence is determined, including the scaling of the structure functions of turbulence, by the invariant measure of the Navier-Stokes equation and the PDFs for the various statistics (one-point, two-point, N-point) can be obtained by taking the trace of the corresponding invariant measures. Hopf derived in 1952 a functional equation for the characteristic function (Fourier transform) of the invariant measure. In distinction to the nonlinear Navier-Stokes equation, this is a linear functional differential equation. The PDFs obtained from the invariant measures for the velocity differences (two-point statistics) are shown to be the four-parameter generalized hyperbolic distributions, found by Barndorff-Nielsen. These PDFs have heavy tails and a convex peak at the origin. A suitable projection of the Kolmogorov-Hopf equations is the differential equation determining the generalized hyperbolic distributions. Then we compare these PDFs with DNS results and experimental data.

  6. Competing contact processes in the Watts-Strogatz network

    NASA Astrophysics Data System (ADS)

    Rybak, Marcin; Malarz, Krzysztof; Kułakowski, Krzysztof

    2016-06-01

    We investigate two competing contact processes on a set of Watts-Strogatz networks with the clustering coefficient tuned by rewiring. The base for network construction is a one-dimensional chain of N sites, where each site i is directly linked to the nodes labelled i ± 1 and i ± 2. So initially, each node has the same degree k_i = 4. Periodic boundary conditions are assumed as well. For each node i the links to sites i + 1 and i + 2 are rewired to two randomly selected nodes so far not connected to node i. An increase of the rewiring probability q influences the node degree distribution and the network clustering coefficient C. For given values of the rewiring probability q, a set N(q) = {N_1, N_2, ..., N_M} of M networks is generated. The network's nodes are decorated with spin-like variables s_i ∈ {S, D}. During the simulation, each S node having a D site in its neighbourhood converts this neighbour from D to S. Conversely, a node in state D having at least one neighbour also in state D converts all nearest neighbours of this pair into state D. The latter is realized with probability p. We plot the dependence of the final density n_S^T of S nodes on their initial fraction n_S^0. Then, we construct the surface of unstable fixed points in (C, p, n_S^0) space. The system evolves more often toward n_S^T = 1 for (C, p, n_S^0) points situated above this surface, while starting the simulation with (C, p, n_S^0) parameters situated below this surface leads the system to n_S^T = 0. The points on this surface correspond to the value of the initial fraction n_S^* of S nodes (for fixed values of C and p) for which the final density is n_S^T = 1/2.
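
    The network construction is simple to reproduce; the sketch below (using networkx, and covering only the rewiring step, not the competing contact dynamics) builds the ring with links to i ± 1 and i ± 2, rewires the forward links with probability q, and reports the clustering coefficient C as a function of q.

```python
# Ring of N sites with links to i+-1 and i+-2; forward links rewired with
# probability q to randomly chosen, not-yet-connected nodes.
import random
import networkx as nx

def rewired_ring(N, q, seed=0):
    rng = random.Random(seed)
    G = nx.Graph()
    G.add_edges_from((i, (i + d) % N) for i in range(N) for d in (1, 2))
    for i in range(N):
        for d in (1, 2):
            j = (i + d) % N
            if G.has_edge(i, j) and rng.random() < q:
                candidates = [k for k in range(N) if k != i and not G.has_edge(i, k)]
                G.remove_edge(i, j)
                G.add_edge(i, rng.choice(candidates))
    return G

for q in (0.0, 0.1, 0.5, 1.0):
    print(f"q = {q:.1f}: clustering C = {nx.average_clustering(rewired_ring(1000, q)):.3f}")
```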

  7. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin independent slices, each with a constant density value and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
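
    As one concrete instance of a skew-symmetric interfacial distribution, the sketch below uses a skew-normal CDF to shape a single interface and then discretizes the continuous profile into thin constant-density slices suitable for a Parratt-type recursion. The skew-normal choice, the depth units and all parameter values are illustrative assumptions, not the specific family used in the paper.

        import numpy as np
        from scipy.stats import skewnorm

        def interfacial_profile(z, z0, width, alpha, rho_lower, rho_upper):
            """Density versus depth z across one interface; a skew-normal CDF
            stands in for a generalized skew-symmetric interfacial distribution.
            alpha skews how far the density penetrates into the adjacent layer."""
            f = skewnorm.cdf(z, alpha, loc=z0, scale=width)
            return rho_lower + (rho_upper - rho_lower) * f

        # Discretize the continuous profile into thin slices with constant density
        # and sharp interfaces, as required by Parratt's recursive formula.
        z = np.linspace(-50.0, 50.0, 2001)              # depth grid (assumed units)
        rho = interfacial_profile(z, z0=0.0, width=8.0, alpha=4.0,
                                  rho_lower=9.4e-6, rho_upper=2.0e-6)
        slice_thickness = np.diff(z)                    # thickness of each slice
        slice_density = 0.5 * (rho[1:] + rho[:-1])      # constant density per slice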

  8. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin independent slices, each with a constant density value and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  9. Evaluation of an Ensemble Dispersion Calculation.

    NASA Astrophysics Data System (ADS)

    Draxler, Roland R.

    2003-02-01

    A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.
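
    The 27-member construction described above is simply the set of all combinations of horizontal grid-point offsets and vertical offsets applied to the particle positions relative to the meteorological data. The sketch below enumerates those members; the field names are illustrative.

        import itertools

        horizontal = (-1, 0, +1)            # grid-point offsets applied in x and in y
        vertical_m = (-250.0, 0.0, +250.0)  # vertical offsets in metres

        members = [
            {"dx_grid": dx, "dy_grid": dy, "dz_m": dz}
            for dx, dy, dz in itertools.product(horizontal, horizontal, vertical_m)
        ]
        assert len(members) == 27           # each member assumed equally probable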

  10. Modelling of the reactive sputtering process with non-uniform discharge current density and different temperature conditions

    NASA Astrophysics Data System (ADS)

    Vašina, P; Hytková, T; Eliáš, M

    2009-05-01

    The majority of current models of reactive magnetron sputtering assume a uniform shape of the discharge current density and the same temperature near the target and the substrate. However, in a real experimental set-up, the presence of the magnetic field causes a high-density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition, the heating of the background gas by sputtered particles, usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of the reactive sputtering process rather than to the prediction of coating properties. Outputs of this model are compared with those that assume a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to modelling the radial variation of the target composition near the transitions from the metallic to the compound mode and vice versa. A study of the target utilization in the metallic and compound modes is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnetics currently available on the market. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.

  11. 2024 Unmanned Undersea Warfare Concept

    DTIC Science & Technology

    2013-06-01

    mine. Assumptions are that the high-tech mine would have a 400-meter range that spans 360 degrees, a 90% probability of detecting a HVU, and a 30...motor volume – The electric propulsion motor is assumed to be 0.127 cubic meters. A common figure of 24" x 18" x 18" is assumed. This size will allow...regard to propagation loss is assumed to be 400 Hz. Using Excel spreadsheet modeling, the maximum range is determined by finding that range resulting in

  12. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  13. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  14. Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities

    NASA Astrophysics Data System (ADS)

    Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii., IV.

    A great deal of attention has been paid to the study of the probability characteristics of electromagnetic waves scattered by one-dimensional fluctuations of the medium dielectric permittivity. However, the problem of determining the probability density and the average intensity of the field inside a stochastically inhomogeneous medium with arbitrary extension of the fluctuations has not been considered yet. The purpose of the present report is to find and analyze these functions for a plane electromagnetic wave scattered by a layer with one-dimensional fluctuations of permittivity. We assumed that the length and the amplitude of the individual fluctuations, as well as the interval between them, are random quantities. All of the indicated fluctuation parameters are supposed to be independent random values possessing a Gaussian distribution. We considered the stationary-in-time case for both small-scale and large-scale rarefied inhomogeneities. Mathematically, such a problem can be reduced to the solution of a Fredholm integral equation of the second kind for the Hertz potential (U). Using the decomposition of the field into a series of multiply scattered waves, we obtained the expression for the probability density of the field of the plane wave and determined the moments of the scattered field. We have shown that all odd moments of the centered field (U - ⟨U⟩) are equal to zero and the even moments depend on the intensity. It was found that the probability density of the field possesses a Gaussian distribution. The average field is small compared with the standard deviation of the scattered field for all considered cases of inhomogeneities. The average intensity of the field is of the order of the standard deviation of the field-intensity fluctuations and decreases as the inhomogeneity length increases in the case of small-scale inhomogeneities. The behavior of the average intensity is more complicated in the case of large-scale medium inhomogeneities: its value is an oscillating function of the average fluctuation length if the standard deviation of the inhomogeneity-length fluctuations is greater than the wavelength. When the standard deviation of the medium inhomogeneity extension is smaller than the wavelength, the average intensity depends only weakly on the average fluctuation extension. The obtained results may be used for the analysis of electromagnetic wave propagation in media with fluctuating parameters caused by such factors as leaves of trees, cumulus clouds, internal gravity waves with a chaotic phase, etc. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).

  15. Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem

    NASA Astrophysics Data System (ADS)

    Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad

    2013-12-01

    In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from the correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are the flipped version of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. The joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, as long as the correlation ρ of the complex fading variation satisfies |ρ| < 1, second-order diversity can be achieved only if the two bit streams are fully correlated. This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of second-order maximum ratio combining (MRC) diversity, if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves, obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.

  16. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  17. Long-Term Fault Memory: A New Time-Dependent Recurrence Model for Large Earthquake Clusters on Plate Boundaries

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.

    2017-12-01

    A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would appear quasiperiodic, while at other times, the events can appear more Poissonian. Hence a given paleoseismic or instrumental record may not reflect the long-term seismicity of a fault, which has important implications for hazard assessment.
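
    The LTFM idea summarized above lends itself to a very small simulation: strain accumulates steadily, the chance of an event grows with the stored strain, and an event releases only part of that strain, so the probability drops but not to zero. The sketch below is a toy illustration of that behaviour; the loading rate, release fraction and hazard scaling are assumed values, not parameters from the study.

        import random

        def simulate_ltfm(n_steps, loading_rate=1.0, release_fraction=0.7,
                          hazard_scale=1e-3, seed=0):
            """Toy Long-Term Fault Memory style simulation: the per-step earthquake
            probability grows with the accumulated 'strain', and each event removes
            only a fraction of it, preserving memory of earlier cycles."""
            rng = random.Random(seed)
            strain, event_times = 0.0, []
            for t in range(n_steps):
                strain += loading_rate
                if rng.random() < min(1.0, hazard_scale * strain):
                    event_times.append(t)
                    strain *= (1.0 - release_fraction)   # partial strain release
            return event_times

        # After an event the retained strain keeps the probability elevated, so
        # events tend to group into clusters separated by quieter intervals.
        print(simulate_ltfm(5000))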

  18. Theoretical model for scattering of radar signals in Ku- and C-bands from a rough sea surface with breaking waves

    NASA Astrophysics Data System (ADS)

    Voronovich, A. G.; Zavorotny, V. U.

    2001-07-01

    A small-slope approximation (SSA) is used for numerical calculations of a radar backscattering cross section of the ocean surface for both Ku- and C-bands for various wind speeds and incident angles. Both the lowest order of the SSA and the one that includes the next-order correction to it are considered. The calculations were made by assuming the surface-height spectrum of Elfouhaily et al. for fully developed seas. Empirical scattering models CMOD2-I3 and SASS-II are used for comparison. Theoretical calculations are in good overall agreement with the experimental data represented by the empirical models, with the exception of HH-polarization in the upwind direction. It was assumed that steep breaking waves are responsible for this effect, and the probability density function of large slopes was calculated based on this assumption. The logarithm of this function in the upwind direction can be approximated by a linear combination of wind speed and the appropriate slope. The resulting backscattering cross section for upwind, downwind and cross-wind directions, for winds ranging between 5 and 15 m s⁻¹, and for both polarizations in both wave bands corresponds to experimental results within 1-2 dB accuracy.

  19. Random analysis of bearing capacity of square footing using the LAS procedure

    NASA Astrophysics Data System (ADS)

    Kawa, Marek; Puła, Wojciech; Suska, Michał

    2016-09-01

    In the present paper, a three-dimensional problem of the bearing capacity of a square footing on a random soil medium is analyzed. The random fields of the strength parameters c and φ are generated using the LAS procedure (Local Average Subdivision, Fenton and Vanmarcke 1990). The procedure has been re-implemented by the authors in the Mathematica environment in order to combine it with a commercial program. Since the procedure is still being tested, the random field has been assumed to be one-dimensional: the strength properties of the soil are random in the vertical direction only. Individual realizations of the bearing capacity boundary problem, with the strength parameters of the medium defined by the above procedure, are solved using the FLAC3D software. The analysis is performed for two qualitatively different cases, namely for purely cohesive and cohesive-frictional soils. For the latter case the friction angle and cohesion have been assumed to be independent random variables. For these two cases the random square footing bearing capacity results have been obtained for a range of fluctuation scales from 0.5 m to 10 m. Each time 1000 Monte Carlo realizations have been performed. The obtained results allow not only the mean and variance but also the probability density function to be estimated. An example of the application of this function to a reliability calculation is presented in the final part of the paper.
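
    A stripped-down version of this Monte Carlo workflow is sketched below. A simple exponentially correlated lognormal field stands in for the LAS generator, and the bearing-capacity solver (FLAC3D in the paper) is left as a placeholder; the soil parameters and fluctuation scale are illustrative.

        import numpy as np

        def lognormal_strength_field(n_layers, dz, mean, cov, theta, rng):
            """One realization of a 1-D (vertical) lognormal strength profile with
            mean `mean`, coefficient of variation `cov` and fluctuation scale
            `theta`, built from an exponentially correlated Gaussian field (a
            stand-in for the LAS procedure used in the paper)."""
            z = np.arange(n_layers) * dz
            corr = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / theta)
            sigma_ln = np.sqrt(np.log(1.0 + cov**2))
            mu_ln = np.log(mean) - 0.5 * sigma_ln**2
            L = np.linalg.cholesky(corr + 1e-10 * np.eye(n_layers))
            return np.exp(mu_ln + sigma_ln * (L @ rng.standard_normal(n_layers)))

        rng = np.random.default_rng(0)
        fields = [lognormal_strength_field(20, 0.5, 100e3, 0.3, 2.0, rng)
                  for _ in range(1000)]
        # capacities = [bearing_capacity(f) for f in fields]   # placeholder solver
        # A histogram of `capacities` then estimates the bearing-capacity pdf.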

  20. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present a novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called disjunctive normal shape model (DNSM). DNSM is formed by disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.

  1. Early evolution of an X-ray emitting solar active region

    NASA Technical Reports Server (NTRS)

    Wolfson, C. J.; Acton, L. W.; Leibacher, J. W.; Roethig, D. T.

    1977-01-01

    The birth and early evolution of a solar active region has been investigated using X-ray observations from the mapping X-ray heliometer on board the OSO-8 spacecraft. X-ray emission is observed within three hours of the first detection of H-alpha plage. At that time, a plasma temperature of four million K in a region having a density on the order of 10 to the 10th power per cu cm is inferred. During the fifty hours following birth almost continuous flares or flare-like X-ray bursts are superimposed on a monotonically increasing base level of X-ray emission produced by the plasma. If the X-rays are assumed to result from heating due to dissipation of current systems or magnetic field reconnection, it may be concluded that flare-like X-ray emission soon after active region birth implies that the magnetic field probably emerges in a stressed or complex configuration.

  2. Probabilistic Analysis of Large-Scale Composite Structures Using the IPACS Code

    NASA Technical Reports Server (NTRS)

    Lemonds, Jeffrey; Kumar, Virendra

    1995-01-01

    An investigation was performed to ascertain the feasibility of using IPACS (Integrated Probabilistic Assessment of Composite Structures) for probabilistic analysis of a composite fan blade, the development of which is being pursued by various industries for the next generation of aircraft engines. A model representative of the class of fan blades used in the GE90 engine has been chosen as the structural component to be analyzed with IPACS. In this study, typical uncertainties are assumed at the ply level, and structural responses for ply stresses and frequencies are evaluated in the form of cumulative probability density functions. Because of the geometric complexity of the blade, the number of plies varies from several hundred at the root to about a hundred at the tip. This represents an extremely complex composites application for the IPACS code. A sensitivity study with respect to various random variables is also performed.

  3. Laser transit anemometer software development program

    NASA Technical Reports Server (NTRS)

    Abbiss, John B.

    1989-01-01

    Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values is also given.

  4. Electrical conductivity of the Earth's mantle from the first Swarm magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Civet, F.; Thébault, E.; Verhoeven, O.; Langlais, B.; Saturnino, D.

    2015-05-01

    We present a 1-D electrical conductivity profile of the Earth's mantle down to 2000 km derived from L1b Swarm satellite magnetic field measurements from November 2013 to September 2014. We first derive a model for the main magnetic field, correct the data for a lithospheric field model, and additionally select the data to reduce the contributions of the ionospheric field. We then model the primary and induced magnetospheric fields for periods between 2 and 256 days and perform a Bayesian inversion to obtain the probability density function for the electrical conductivity as a function of depth. The conductivity increases by 3 orders of magnitude in the 400-900 km depth range. Assuming a pyrolitic mantle composition, this profile is interpreted in terms of temperature variations, leading to a temperature gradient in the lower mantle that is close to adiabatic.

  5. What distinguishes individual stocks from the index?

    NASA Astrophysics Data System (ADS)

    Wagner, F.; Milaković, M.; Alfarano, S.

    2010-01-01

    Stochastic volatility models decompose the time series of financial returns into the product of a volatility factor and an iid noise factor. Assuming a slow dynamic for the volatility factor, we show via nonparametric tests that both the index as well as its individual stocks share a common volatility factor. While the noise component is Gaussian for the index, individual stock returns turn out to require a leptokurtic noise. Thus we propose a two-component model for stocks, given by the sum of Gaussian noise, which reflects market-wide fluctuations, and Laplacian noise, which incorporates firm-specific factors such as firm profitability or growth performance, both of which are known to be Laplacian distributed. In the case of purely Gaussian noise, the chi-squared probability for the density of individual stock returns is typically on the order of 10⁻²⁰, while it increases to values of O(1) by adding the Laplace component.
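
    The two-component model described above is easy to write down as a generative sketch: a slowly varying volatility factor multiplies a noise term that is the sum of a Gaussian (market-wide) and a Laplacian (firm-specific) component. The AR(1) log-volatility, the component weights and all other values below are illustrative assumptions, not the authors' calibration.

        import numpy as np

        def stock_returns(n, phi=0.99, sigma_eta=0.05, w_gauss=0.7, w_laplace=0.3, seed=1):
            """Simulate returns as (slow volatility factor) x (Gaussian + Laplacian
            noise). The log-volatility follows an AR(1) process as a simple way to
            impose slow dynamics; both noise components have unit variance."""
            rng = np.random.default_rng(seed)
            log_vol = np.zeros(n)
            for t in range(1, n):
                log_vol[t] = phi * log_vol[t - 1] + sigma_eta * rng.standard_normal()
            noise = (w_gauss * rng.standard_normal(n)
                     + w_laplace * rng.laplace(0.0, 1.0 / np.sqrt(2.0), n))
            return np.exp(log_vol) * noise

        returns = stock_returns(10_000)
        print(returns.std(), np.mean(np.abs(returns) > 4 * returns.std()))  # fat tails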

  6. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.

  7. Performance of correlation receivers in the presence of impulse noise.

    NASA Technical Reports Server (NTRS)

    Moore, J. D.; Houts, R. C.

    1972-01-01

    An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.

  8. Quantification of brain tissue through incorporation of partial volume effects

    NASA Astrophysics Data System (ADS)

    Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.

    1992-06-01

    This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
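
    The statistical model outlined above can be sketched as a mixture density: each pure tissue class contributes a Gaussian centred on its mean intensity, and each pair of adjacent classes contributes a partial-volume term, i.e. a uniform mixing fraction convolved with the same Gaussian noise. Fitting this density to the image histogram would then fix the weights; the class means, noise level and weights below are illustrative.

        import numpy as np
        from scipy.stats import norm

        def intensity_pdf(x, means, w_pure, w_pv, sigma):
            """Model density for T1-weighted intensities: Gaussians for pure CSF,
            gray matter and white matter plus uniform partial-volume terms between
            adjacent classes, all blurred by the same Gaussian noise."""
            x = np.asarray(x, dtype=float)
            pdf = sum(w * norm.pdf(x, m, sigma) for w, m in zip(w_pure, means))
            for w, (m1, m2) in zip(w_pv, zip(means[:-1], means[1:])):
                # uniform on [m1, m2] convolved with N(0, sigma^2):
                pdf = pdf + w * (norm.cdf(x, m1, sigma) - norm.cdf(x, m2, sigma)) / (m2 - m1)
            return pdf

        x = np.linspace(0, 255, 512)
        model = intensity_pdf(x, means=[40.0, 110.0, 160.0],
                              w_pure=[0.2, 0.35, 0.3], w_pv=[0.1, 0.05], sigma=8.0)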

  9. Relative performance of selected detectors

    NASA Astrophysics Data System (ADS)

    Ranney, Kenneth I.; Khatri, Hiralal; Nguyen, Lam H.; Sichina, Jeffrey

    2000-08-01

    The quadratic polynomial detector (QPD) and the radial basis function (RBF) family of detectors -- including the Bayesian neural network (BNN) -- might well be considered workhorses within the field of automatic target detection (ATD). The QPD works reasonably well when the data is unimodal, and it also achieves the best possible performance if the underlying data follow a Gaussian distribution. The BNN, on the other hand, has been applied successfully in cases where the underlying data are assumed to follow a multimodal distribution. We compare the performance of a BNN detector and a QPD for various scenarios synthesized from a set of Gaussian probability density functions (pdfs). This data synthesis allows us to control parameters such as modality and correlation, which, in turn, enables us to create data sets that can probe the weaknesses of the detectors. We present results for different data scenarios and different detector architectures.

  10. Bone histological correlates of soaring and high-frequency flapping flight in the furculae of birds.

    PubMed

    Mitchell, Jessica; Legendre, Lucas J; Lefèvre, Christine; Cubo, Jorge

    2017-06-01

    The furcula is a specialized bone in birds involved in flight function. Its morphology has been shown to reflect different flight styles, ranging from soaring/gliding and subaqueous flight to high-frequency flapping. The strain experienced by furculae can vary depending on flight type. Bone remodeling is a response to damage incurred from different strain magnitudes and types. In this study, we tested whether a bone microstructural feature, namely Haversian bone density, differs in birds with different flight styles, and reassessed previous work, with additional taxa, using phylogenetic comparative methods that assume an evolutionary model. We show that soaring birds have higher Haversian bone densities than birds with a flapping style of flight. This result is probably linked to the fact that the furculae of soaring birds provide less protraction force and more depression force than the furculae of birds showing other kinds of flight. The whole bone area is another explanatory factor, which confirms that size is an important consideration in Haversian bone development. All birds, however, display Haversian bone development in their furculae, and other factors like age could be affecting the response of Haversian bone development. Copyright © 2017 Elsevier GmbH. All rights reserved.

  11. Kinetic effects in InP nanowire growth and stacking fault formation: the role of interface roughening.

    PubMed

    Chiaramonte, Thalita; Tizei, Luiz H G; Ugarte, Daniel; Cotta, Mônica A

    2011-05-11

    InP nanowire polytypic growth was thoroughly studied using electron microscopy techniques as a function of the In precursor flow. The dominant InP crystal structure is wurtzite, and growth parameters determine the density of stacking faults (SF) and zinc blende segments along the nanowires (NWs). Our results show that SF formation in InP NWs cannot be univocally attributed to the droplet supersaturation, if we assume this variable to be proportional to the ex situ In atomic concentration at the catalyst particle. An imbalance between this concentration and the axial growth rate was detected for growth conditions associated with larger SF densities along the NWs, suggesting a different route of precursor incorporation at the triple phase line in that case. The formation of SFs can be further enhanced by varying the In supply during growth and is suppressed for small diameter NWs grown under the same conditions. We attribute the observed behaviors to kinetically driven roughening of the semiconductor/metal interface. The consequent deformation of the triple phase line increases the probability of a phase change at the growth interface in an effort to reach local minima of system interface and surface energy.

  12. Uncertainties propagation and global sensitivity analysis of the frequency response function of piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Ruiz, Rafael O.; Meruane, Viviana

    2017-06-01

    The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the most relevant parameters for the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool for the robust design and prediction of PEH performance.

  13. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
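
    For an axisymmetric free layer the probability density of the magnetization direction depends only on x = cos θ, so a Legendre expansion p(x) = Σ_l c_l P_l(x) makes the switching probability (the mass in the reversed hemisphere, x < 0) a one-line integral. The sketch below assumes the expansion coefficients are already available; it illustrates the expansion idea only and is not the paper's derivation.

        import numpy as np
        from numpy.polynomial import legendre as leg

        def switching_probability(coeffs):
            """Integrate p(x) = sum_l c_l P_l(x) over x in [-1, 0], i.e. the
            probability that the magnetization has reversed (theta > pi/2)."""
            antideriv = leg.legint(coeffs)     # antiderivative in the Legendre basis
            return leg.legval(0.0, antideriv) - leg.legval(-1.0, antideriv)

        # Uniform density p(x) = 1/2 gives a switching probability of 1/2:
        print(switching_probability([0.5]))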

  14. Attributes of seasonal home range influence choice of migratory strategy in white-tailed deer

    USGS Publications Warehouse

    Henderson, Charles R.; Mitchell, Michael S.; Myers, Woodrow L.; Lukacs, Paul M.; Nelson, Gerald P.

    2018-01-01

    Partial migration is a common life-history strategy among ungulates living in seasonal environments. The decision to migrate or remain on a seasonal range may be influenced strongly by access to high-quality habitat. We evaluated the influence of access to winter habitat of high quality on the probability of a female white-tailed deer (Odocoileus virginianus) migrating to a separate summer range and the effects of this decision on survival. We hypothesized that deer with home ranges of low quality in winter would have a high probability of migrating, and that survival of an individual in winter would be influenced by the quality of their home range in winter. We radiocollared 67 female white-tailed deer in 2012 and 2013 in eastern Washington, United States. We estimated home range size in winter using a kernel density estimator; we assumed the size of the home range was inversely proportional to its quality and the proportion of crop land within the home range was proportional to its quality. Odds of migrating from winter ranges increased by 3.1 per unit increase in home range size and decreased by 0.29 per unit increase in the proportion of crop land within a home range. Annual survival rate for migrants was 0.85 (SD = 0.05) and 0.84 (SD = 0.09) for residents. Our finding that an individual with a low-quality home range in winter is likely to migrate to a separate summer range accords with the hypothesis that competition for a limited amount of home ranges of high quality should result in residents having home ranges of higher quality than migrants in populations experiencing density dependence. We hypothesize that density-dependent competition for high-quality home ranges in winter may play a leading role in the selection of migration strategy by female white-tailed deer.

  15. Intervening O vi Quasar Absorption Systems at Low Redshift: A Significant Baryon Reservoir.

    PubMed

    Tripp; Savage; Jenkins

    2000-05-01

    Far-UV echelle spectroscopy of the radio-quiet QSO H1821+643 (z_em = 0.297), obtained with the Space Telescope Imaging Spectrograph (STIS) at approximately 7 km s⁻¹ resolution, reveals four definite O vi absorption-line systems and one probable O vi absorber at 0.15

  16. Models of epidemics: when contact repetition and clustering should be included

    PubMed Central

    Smieszek, Timo; Fiebig, Lena; Scholz, Roland W

    2009-01-01

    Background The spread of infectious disease is determined by biological factors, e.g. the duration of the infectious period, and social factors, e.g. the arrangement of potentially contagious contacts. Repetitiveness and clustering of contacts are known to be relevant factors influencing the transmission of droplet or contact transmitted diseases. However, we do not yet completely know under what conditions repetitiveness and clustering should be included for realistically modelling disease spread. Methods We compare two different types of individual-based models: One assumes random mixing without repetition of contacts, whereas the other assumes that the same contacts repeat day-by-day. The latter exists in two variants, with and without clustering. We systematically test and compare how the total size of an outbreak differs between these model types depending on the key parameters transmission probability, number of contacts per day, duration of the infectious period, different levels of clustering and varying proportions of repetitive contacts. Results The simulation runs under different parameter constellations provide the following results: The difference between the two model types is highest for low numbers of contacts per day and low transmission probabilities. The number of contacts and the transmission probability have a higher influence on this difference than the duration of the infectious period. Even when only a minor part of the daily contacts is repetitive and clustered, there can be relevant differences compared to a purely random mixing model. Conclusion We show that random mixing models provide acceptable estimates of the total outbreak size if the number of contacts per day is high or if the per-contact transmission probability is high, as seen in typical childhood diseases such as measles. In the case of very short infectious periods, for instance, as in Norovirus, models assuming repeating contacts will also behave similarly to random mixing models. If the number of daily contacts or the transmission probability is low, as assumed for MRSA or Ebola, particular consideration should be given to the actual structure of potentially contagious contacts when designing the model. PMID:19563624
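
    The comparison at the heart of this study can be mimicked with a very small individual-based SIR sketch: one variant draws new random contacts every day, the other reuses a fixed contact list (clustering is not modelled here). All parameter values are illustrative, not those of the paper.

        import random

        def outbreak_size(n=1000, contacts_per_day=5, p_transmit=0.05,
                          infectious_days=3, repeat_contacts=False, seed=0):
            """Final size of a toy SIR outbreak with either daily random mixing
            (repeat_contacts=False) or the same contacts repeated day by day."""
            rng = random.Random(seed)
            fixed = ({i: rng.sample([j for j in range(n) if j != i], contacts_per_day)
                      for i in range(n)} if repeat_contacts else None)
            days_left = [0] * n                 # >0 means currently infectious
            ever_infected = {0}
            days_left[0] = infectious_days
            while any(d > 0 for d in days_left):
                newly = []
                for i in range(n):
                    if days_left[i] <= 0:
                        continue
                    contacts = (fixed[i] if repeat_contacts else
                                rng.sample([j for j in range(n) if j != i], contacts_per_day))
                    newly += [j for j in contacts
                              if j not in ever_infected and rng.random() < p_transmit]
                    days_left[i] -= 1
                for j in set(newly):
                    ever_infected.add(j)
                    days_left[j] = infectious_days
            return len(ever_infected)

        print(outbreak_size(repeat_contacts=False), outbreak_size(repeat_contacts=True))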

  17. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  18. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  19. Predictions from star formation in the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Leichenauer, Stefan

    2010-03-15

    We compute trivariate probability distributions in the landscape, scanning simultaneously over the cosmological constant, the primordial density contrast, and spatial curvature. We consider two different measures for regulating the divergences of eternal inflation, and three different models for observers. In one model, observers are assumed to arise in proportion to the entropy produced by stars; in the others, they arise at a fixed time (5 or 10 x 10^9 years) after star formation. The star formation rate, which underlies all our observer models, depends sensitively on the three scanning parameters. We employ a recently developed model of star formation in the multiverse, a considerable refinement over previous treatments of the astrophysical and cosmological properties of different pocket universes. For each combination of observer model and measure, we display all single and bivariate probability distributions, both with the remaining parameter(s) held fixed and marginalized. Our results depend only weakly on the observer model but more strongly on the measure. Using the causal diamond measure, the observed parameter values (or bounds) lie within the central 2σ of nearly all probability distributions we compute, and always within 3σ. This success is encouraging and rather nontrivial, considering the large size and dimension of the parameter space. The causal patch measure gives similar results as long as curvature is negligible. If curvature dominates, the causal patch leads to a novel runaway: it prefers a negative value of the cosmological constant, with the smallest magnitude available in the landscape.

  20. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  1. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  2. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
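
    The surveillance scheme summarized in these records pairs a fitted residual density with a sequential hypothesis test. The sketch below uses a normal fit and a classic sequential probability ratio test against a shifted-mean alternative as an illustrative stand-in; the specific density fit, alternative hypothesis and thresholds of the patented method are not reproduced here.

        import numpy as np
        from scipy.stats import norm

        def fit_residual_density(training_residuals):
            """Fit a simple parametric density to residuals from normal operation."""
            r = np.asarray(training_residuals, dtype=float)
            return float(r.mean()), float(r.std(ddof=1))

        def monitor(residual_stream, mu0, sigma, shift=2.0, alpha=0.01, beta=0.01):
            """Sequential probability ratio test: H0 = fitted density, H1 = the same
            density shifted by `shift` standard deviations. Returns the index at
            which an alarm is raised, or None."""
            lower = np.log(beta / (1.0 - alpha))
            upper = np.log((1.0 - beta) / alpha)
            llr = 0.0
            for k, r in enumerate(residual_stream):
                llr += norm.logpdf(r, mu0 + shift * sigma, sigma) - norm.logpdf(r, mu0, sigma)
                if llr >= upper:
                    return k                    # degradation signalled
                if llr <= lower:
                    llr = 0.0                   # accept H0, keep monitoring
            return None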

  3. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources such as the emerging terrestrial system, High Density Fixed Service, with the Ka-band receiving antenna systems in Goldstone Station of the Deep Space Network.

  4. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  5. An Analytic Form for the Interresponse Time Analysis of Shull, Gaynor, and Grimes with Applications and Extensions

    ERIC Educational Resources Information Center

    Kessel, Robert; Lucke, Robert L.

    2008-01-01

    Shull, Gaynor and Grimes advanced a model for interresponse time distribution using probabilistic cycling between a higher-rate and a lower-rate response process. Both response processes are assumed to be random in time with a constant rate. The cycling between the two processes is assumed to have a constant transition probability that is…
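
    The model summarized above, two exponential (Poisson-rate) response processes with constant-probability switching between them, is easy to simulate. The rates and switching probability below are illustrative, and the switching rule (checked after every response) is an assumed reading of the model.

        import random

        def simulate_irts(n, rate_high=2.0, rate_low=0.2, p_switch=0.1, seed=0):
            """Generate n interresponse times from a process that cycles between a
            higher-rate and a lower-rate exponential response process."""
            rng = random.Random(seed)
            in_high_state, irts = True, []
            for _ in range(n):
                irts.append(rng.expovariate(rate_high if in_high_state else rate_low))
                if rng.random() < p_switch:
                    in_high_state = not in_high_state
            return irts

        # The pooled IRT distribution is then a mixture of two exponentials:
        sample = simulate_irts(10_000)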

  6. Continuous Strategy Development for Effects-Based Operations

    DTIC Science & Technology

    2006-02-01

    the probability of COA success. The time slider from the “Time Selector” choice in the View menu may also be used to animate the probability coloring...will Deploy WMD, since this can be assumed to have the inverse probability (1-P) of our objective. Clausewitz theory teaches us that an enemy must be... using XSLT, a concise language for transforming XML documents, for forward and reverse conversion between the SDT and SMS plan formats. 2. Develop a

  7. Hydrodynamic Flow Fluctuations in √s_NN = 5.02 TeV PbPb Collisions

    NASA Astrophysics Data System (ADS)

    Castle, James R.

    The collective, anisotropic expansion of the medium created in ultrarelativistic heavy-ion collisions, known as flow, is characterized through a Fourier expansion of the final-state azimuthal particle density. In the Fourier expansion, flow harmonic coefficients v_n correspond to shape components in the final-state particle density, which are a consequence of similar spatial anisotropies in the initial-state transverse energy density of a collision. Flow harmonic fluctuations are studied for PbPb collisions at √s_NN = 5.02 TeV using the CMS detector at the CERN LHC. Flow harmonic probability distributions p(v_n) are obtained using particles with 0.3 < p_T < 3.0 GeV/c and |η| < 1.0 by removing finite-multiplicity resolution effects from the observed azimuthal particle density through an unfolding procedure. Cumulant elliptic flow harmonics (n = 2) are determined from the moments of the unfolded p(v_2) distributions and used to construct observables in 5% wide centrality bins up to 60% that relate to the initial-state spatial anisotropy. Hydrodynamic models predict that fluctuations in the initial-state transverse energy density will lead to a non-Gaussian component in the elliptic flow probability distributions that manifests as a negative skewness. A statistically significant negative skewness is observed for all centrality bins, as evidenced by a splitting between the higher-order cumulant elliptic flow harmonics. The unfolded p(v_2) distributions are transformed assuming a linear relationship between the initial-state spatial anisotropy and the final-state flow and are fitted with elliptic power law and Bessel-Gaussian parametrizations to infer information on the nature of initial-state fluctuations. The elliptic power law parametrization is found to provide a more accurate description of the fluctuations than the Bessel-Gaussian parametrization. In addition, the event-shape engineering technique, where events are further divided into classes based on an observed ellipticity, is used to study fluctuation-driven differences in the initial-state spatial anisotropy for a given collision centrality that would otherwise be destroyed by event-averaging techniques. Correlations between the first and second moments of p(v_n) distributions and event ellipticity are measured for harmonic orders n = 2-4 by coupling event-shape engineering to the unfolding technique.

  8. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  9. Geometric Probability. New Topics for Secondary School Mathematics. Materials and Software.

    ERIC Educational Resources Information Center

    National Council of Teachers of Mathematics, Inc., Reston, VA.

    These materials on geometric probability are the first unit in a course being developed by the Mathematics Department at the North Carolina School of Science and Mathematics. This course is designed to prepare high school students who have completed Algebra 2 for the variety of math courses they will encounter in college. Assuming only a knowledge…

  10. Predicting the probability of slip in gait: methodology and distribution study.

    PubMed

    Gragg, Jared; Yang, James

    2016-01-01

    The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
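
    The single-integral form discussed above, P(slip) = ∫ f_req(μ) F_avail(μ) dμ, is the probability that the available friction falls below the required friction, and the trapezoidal rule evaluates it for any pair of distributions. The lognormal/beta choices below are only placeholders to show that non-normal inputs are handled.

        import numpy as np
        from scipy import stats

        def probability_of_slip(required_pdf, available_cdf, mu_grid):
            """P(slip) = integral of f_required(mu) * F_available(mu) d(mu),
            evaluated with the trapezoidal rule on mu_grid."""
            integrand = required_pdf(mu_grid) * available_cdf(mu_grid)
            widths = np.diff(mu_grid)
            return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * widths))

        mu = np.linspace(0.0, 1.5, 2001)
        required = stats.lognorm(s=0.25, scale=0.22)      # assumed required-friction model
        available = stats.beta(a=8.0, b=12.0)             # assumed available-friction model
        print(probability_of_slip(required.pdf, available.cdf, mu))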

  11. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β), where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
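
    The posterior-predictive use of the Beta-Binomial described above can be sketched in a few lines with SciPy. The prior parameters and the example numbers below are purely illustrative; in the article they would come from a maximum likelihood fit to the survey sample.

        from scipy.stats import betabinom

        def p_meet_guidelines(active_days, assessed_days, a, b, week_days=7):
            """With a Beta(a, b) prior on the per-day probability of being active,
            an individual observed active on `active_days` of `assessed_days` has a
            Beta-Binomial(week_days, a + active, b + inactive) predictive
            distribution for active days in a full week; meeting the guidelines
            every day is the mass at week_days."""
            inactive_days = assessed_days - active_days
            return betabinom.pmf(week_days, week_days,
                                 a + active_days, b + inactive_days)

        # Example: 4 active days out of 4 assessed, with an assumed Beta(2, 1.5) prior.
        print(p_meet_guidelines(active_days=4, assessed_days=4, a=2.0, b=1.5))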

  12. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)

  13. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time it shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a-priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
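
    A minimal sketch of how the Beta pdf model can be used for exceedance probabilities, assuming only that the first two moments of the (normalized, 0-1) concentration are available; the numbers below are illustrative, not taken from the paper:

        from scipy import stats

        def beta_exceedance(mean_c, var_c, threshold):
            """Probability that the normalized concentration exceeds a threshold,
            assuming the Beta pdf model characterized by its first two moments."""
            k = mean_c * (1.0 - mean_c) / var_c - 1.0   # requires var_c < mean_c * (1 - mean_c)
            a, b = mean_c * k, (1.0 - mean_c) * k
            return stats.beta.sf(threshold, a, b)

        # Illustrative numbers only: mean and variance would come from transport moments.
        print(beta_exceedance(mean_c=0.2, var_c=0.01, threshold=0.5))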

  14. Optimum aim point biasing in case of a planetary quarantine constraint.

    NASA Technical Reports Server (NTRS)

    Gedeon, G. S.; Dvornychenko, V. N.

    1972-01-01

    It is assumed that the probability of impact for each maneuver is the same, and that the aspects of orbit determination and execution errors of each maneuver affect only the targeting. An approximation of the equal probability of impact contour is derived. It is assumed that the quarantine constraint is satisfied if the aim point is not inside the impact contour. A method is devised to find on each contour the optimum aim point which minimizes the so-called bias velocity which is required to bring back the spacecraft from the biased aim point to the originally desired aim point. The method is an improvement over the approach presented by Light (1965), and Craven and Wolfson (1967).

  15. Accounting for unsearched areas in estimating wind turbine-caused fatality

    USGS Publications Warehouse

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    With wind energy production expanding rapidly, concerns about turbine-induced bird and bat fatality have grown and the demand for accurate estimation of fatality is increasing. Estimation typically involves counting carcasses observed below turbines and adjusting counts by estimated detection probabilities. Three primary sources of imperfect detection are 1) carcasses fall into unsearched areas, 2) carcasses are removed or destroyed before sampling, and 3) carcasses present in the searched area are missed by observers. Search plots large enough to comprise 100% of turbine-induced fatality are expensive to search and may nonetheless contain areas unsearchable because of dangerous terrain or impenetrable brush. We evaluated models relating carcass density to distance from the turbine to estimate the proportion of carcasses expected to fall in searched areas and evaluated the statistical cost of restricting searches to areas near turbines where carcass density is highest and search conditions optimal. We compared 5 estimators differing in assumptions about the relationship of carcass density to distance from the turbine. We tested them on 6 different carcass dispersion scenarios at each of 3 sites under 2 different search regimes. We found that even simple distance-based carcass-density models were more effective at reducing bias than was a 5-fold expansion of the search area. Estimators incorporating fitted rather than assumed models were least biased, even under restricted searches. Accurate estimates of fatality at wind-power facilities will allow critical comparisons of rates among turbines, sites, and regions and contribute to our understanding of the potential environmental impact of this technology.
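
    A hedged sketch of the distance-based adjustment idea: fit a distribution to observed carcass fall distances and treat its CDF at the search radius as the expected proportion of carcasses within the searched area. The gamma model, the distances, and the counts below are invented for illustration, and the sketch ignores refinements (area weighting, carcass removal, searcher efficiency) that the actual estimators handle:

        import numpy as np
        from scipy import stats

        # Hypothetical carcass-to-turbine distances (m) recorded in searched plots.
        distances = np.array([8, 12, 15, 22, 27, 31, 38, 44, 52, 63], dtype=float)

        # Fit a distance distribution (a gamma model is one of many possible choices).
        shape, loc, scale = stats.gamma.fit(distances, floc=0.0)

        search_radius = 50.0                                    # searched out to 50 m
        p_searched = stats.gamma.cdf(search_radius, shape, loc=loc, scale=scale)

        observed = 7                                            # carcasses actually found
        adjusted = observed / p_searched                        # count adjusted for unsearched area
        print(f"fraction expected in searched area: {p_searched:.2f}; adjusted count: {adjusted:.1f}")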

  16. Stochastic modelling of wall stresses in abdominal aortic aneurysms treated by a gene therapy.

    PubMed

    Mohand-Kaci, Faïza; Ouni, Anissa Eddhahak; Dai, Jianping; Allaire, Eric; Zidi, Mustapha

    2012-01-01

    A stochastic mechanical model using the membrane theory was used to simulate the in vivo mechanical behaviour of abdominal aortic aneurysms (AAAs) in order to compute the wall stresses after stabilisation by gene therapy. For that, both length and diameter of AAAs rats were measured during their expansion. Four groups of animals, control and treated by an endovascular gene therapy during 3 or 28 days were included. The mechanical problem was solved analytically using the geometric parameters and assuming the shape of aneurysms by a 'parabolic-exponential curve'. When compared to controls, stress variations in the wall of AAAs for treated arteries during 28 days decreased, while they were nearly constant at day 3. The measured geometric parameters of AAAs were then investigated using probability density functions (pdf) attributed to every random variable. Different trials were useful to define a reliable confidence region in which the probability to have a realisation is equal to 99%. The results demonstrated that the error in the estimation of the stresses can be greater than 28% when parameters uncertainties are not considered in the modelling. The relevance of the proposed approach for the study of AAA growth may be studied further and extended to other treatments aimed at stabilisation AAAs, using biotherapies and pharmacological approaches.

  17. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method.

    PubMed

    Norris, Peter M; da Silva, Arlindo M

    2016-07-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
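
    A minimal random-walk Metropolis sketch of the MCMC step described above, with an invented one-dimensional log-posterior standing in for the moisture-variability parameters; it is not the authors' implementation:

        import numpy as np

        def metropolis(log_post, x0, n_steps=5000, step=0.5, rng=None):
            """Minimal random-walk Metropolis sampler for a (log) posterior density."""
            rng = np.random.default_rng() if rng is None else rng
            x, lp = x0, log_post(x0)
            chain = np.empty(n_steps)
            for i in range(n_steps):
                prop = x + step * rng.standard_normal()
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:    # accept with probability min(1, ratio)
                    x, lp = prop, lp_prop
                chain[i] = x
            return chain

        def log_post(q):
            # Toy truncated-Gaussian "posterior" for a moisture-like quantity (illustrative only).
            return -np.inf if q <= 0 else -0.5 * ((q - 0.3) / 0.1) ** 2

        samples = metropolis(log_post, x0=0.3, step=0.05)
        print("posterior mean ~", samples[1000:].mean())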

  18. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    NASA Technical Reports Server (NTRS)

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  19. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method

    PubMed Central

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC. PMID:29618847

  20. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
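
    The basic BMA predictive pdf, a weighted mixture of member pdfs centred on the (possibly bias-corrected) forecasts, can be sketched as below. Gaussian members and the ensemble values are assumed for illustration only; the paper replaces the Gaussian member pdfs with filtered, mixture-based conditional pdfs:

        import numpy as np
        from scipy import stats

        def bma_pdf(y, forecasts, weights, sigmas):
            """BMA predictive density: weighted mixture of member pdfs centred on the forecasts
            (Gaussian members assumed here for simplicity)."""
            y = np.atleast_1d(y)[:, None]
            comps = stats.norm.pdf(y, loc=np.asarray(forecasts), scale=np.asarray(sigmas))
            return comps @ np.asarray(weights)

        # Hypothetical three-member discharge ensemble (m^3/s) with BMA weights.
        forecasts, weights, sigmas = [12.0, 15.5, 14.0], [0.2, 0.5, 0.3], [2.0, 1.5, 2.5]
        grid = np.linspace(5, 25, 201)
        pdf = bma_pdf(grid, forecasts, weights, sigmas)
        print("integrates to ~1:", np.trapz(pdf, grid))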

  1. A computational model for biosonar echoes from foliage

    PubMed Central

    Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao

    2017-01-01

    Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals’ sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats. PMID:28817631
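
    A rough sketch of how one foliage realization with the three model parameters (leaf density, mean size, mean orientation) might be sampled, plus a crude delayed-return echo; the scattering weight used here is only a placeholder and not the published disk-scattering model:

        import numpy as np

        rng = np.random.default_rng(0)

        # Foliage parameters (illustrative values): leaves per m^3, mean leaf radius (m),
        # mean leaf inclination (rad); all leaf parameters i.i.d., as in the model.
        density, mean_radius, mean_tilt = 200, 0.04, 0.5
        volume = 2.0 * 2.0 * 2.0                        # insonified 2 m cube, 1 m from the sonar
        n_leaves = rng.poisson(density * volume)

        pos    = rng.uniform([1.0, -1.0, -1.0], [3.0, 1.0, 1.0], size=(n_leaves, 3))  # uniform positions
        radius = np.abs(rng.normal(mean_radius, 0.01, n_leaves))                      # Gaussian sizes
        tilt   = rng.normal(mean_tilt, 0.2, n_leaves)                                 # Gaussian orientations

        # Placeholder echo: each leaf contributes a delayed return weighted by its
        # projected disk area and spherical spreading (not the paper's scattering model).
        c = 343.0
        r = np.linalg.norm(pos, axis=1)
        delay = 2.0 * r / c
        amplitude = (np.pi * radius**2 * np.abs(np.cos(tilt))) / r**2
        order = np.argsort(delay)
        echo_times, echo_amps = delay[order], amplitude[order]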

  2. A computational model for biosonar echoes from foliage.

    PubMed

    Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf

    2017-01-01

    Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals' sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats.

  3. A Concise Introduction to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Swanson, Mark S.

    2018-02-01

    Assuming a background in basic classical physics, multivariable calculus, and differential equations, A Concise Introduction to Quantum Mechanics provides a self-contained presentation of the mathematics and physics of quantum mechanics. The relevant aspects of classical mechanics and electrodynamics are reviewed, and the basic concepts of wave-particle duality are developed as a logical outgrowth of experiments involving blackbody radiation, the photoelectric effect, and electron diffraction. The Copenhagen interpretation of the wave function and its relation to the particle probability density is presented in conjunction with Fourier analysis and its generalization to function spaces. These concepts are combined to analyze the system consisting of a particle confined to a box, developing the probabilistic interpretation of observations and their associated expectation values. The Schrödinger equation is then derived by using these results and demanding both Galilean invariance of the probability density and Newtonian energy-momentum relations. The general properties of the Schrödinger equation and its solutions are analyzed, and the theory of observables is developed along with the associated Heisenberg uncertainty principle. Basic applications of wave mechanics are made to free wave packet spreading, barrier penetration, the simple harmonic oscillator, the Hydrogen atom, and an electric charge in a uniform magnetic field. In addition, Dirac notation, elements of Hilbert space theory, operator techniques, and matrix algebra are presented and used to analyze coherent states, the linear potential, two state oscillations, and electron diffraction. Applications are made to photon and electron spin and the addition of angular momentum, and direct product multiparticle states are used to formulate both the Pauli exclusion principle and quantum decoherence. The book concludes with an introduction to the rotation group and the general properties of angular momentum.

  4. Does prescribed fire promote resistance to drought in low elevation forests of the Sierra Nevada, California, USA?

    USGS Publications Warehouse

    van Mantgem, Phillip J.; Caprio, Anthony C.; Stephenson, Nathan L.; Das, Adrian J.

    2016-01-01

    Prescribed fire is a primary tool used to restore western forests following more than a century of fire exclusion, reducing fire hazard by removing dead and live fuels (small trees and shrubs). It is commonly assumed that the reduced forest density following prescribed fire also reduces competition for resources among the remaining trees, so that the remaining trees are more resistant (more likely to survive) in the face of additional stressors, such as drought. Yet this proposition remains largely untested, so that managers do not have the basic information to evaluate whether prescribed fire may help forests adapt to a future of more frequent and severe drought. During the third year of drought, in 2014, we surveyed 9950 trees in 38 burned and 18 unburned mixed conifer forest plots at low elevation (<2100 m a.s.l.) in Kings Canyon, Sequoia, and Yosemite national parks in California, USA. Fire had occurred in the burned plots from 6 yr to 28 yr before our survey. After accounting for differences in individual tree diameter, common conifer species found in the burned plots had significantly reduced probability of mortality compared to unburned plots during the drought. Stand density (stems ha^-1) was significantly lower in burned versus unburned sites, supporting the idea that reduced competition may be responsible for the differential drought mortality response. At the time of writing, we are not sure if burned stands will maintain lower tree mortality probabilities in the face of the continued, severe drought of 2015. Future work should aim to better identify drought response mechanisms and how these may vary across other forest types and regions, particularly in other areas experiencing severe drought in the Sierra Nevada and on the Colorado Plateau.

  5. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
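
    A small sketch of working with a Gaussian-mixture redshift PDF of the kind DCMDN outputs (the mixture weights, means and widths below are invented); the probability integral transform is simply the mixture CDF evaluated at the known redshift:

        import numpy as np
        from scipy import stats

        # Hypothetical 3-component Gaussian mixture returned for one galaxy.
        weights = np.array([0.6, 0.3, 0.1])
        means   = np.array([0.42, 0.55, 0.80])
        sigmas  = np.array([0.03, 0.05, 0.10])

        z = np.linspace(0.0, 1.2, 601)
        pdf = (weights * stats.norm.pdf(z[:, None], means, sigmas)).sum(axis=1)

        # Probability integral transform for a known spectroscopic redshift: the mixture
        # CDF at z_spec (uniform PIT values over a test set indicate calibrated PDFs).
        z_spec = 0.47
        pit = (weights * stats.norm.cdf(z_spec, means, sigmas)).sum()
        print(f"PIT value at z_spec = {pit:.3f}")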

  6. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  7. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  8. Ionization balance in Titan's nightside ionosphere

    NASA Astrophysics Data System (ADS)

    Vigren, E.; Galand, M.; Yelle, R. V.; Wellbrock, A.; Coates, A. J.; Snowden, D.; Cui, J.; Lavvas, P.; Edberg, N. J. T.; Shebanits, O.; Wahlund, J.-E.; Vuitton, V.; Mandt, K.

    2015-03-01

    Based on a multi-instrumental Cassini dataset we make model versus observation comparisons of plasma number densities, n_P = (n_e·n_I)^1/2 (n_e and n_I being the electron number density and total positive ion number density, respectively) and short-lived ion number densities (N+, CH2+, CH3+, CH4+) in the southern hemisphere of Titan's nightside ionosphere over altitudes ranging from 1100 to 1200 km and from 1100 to 1350 km, respectively. The n_P model assumes photochemical equilibrium, ion-electron pair production driven by magnetospheric electron precipitation and dissociative recombination as the principal plasma neutralization process. The model to derive short-lived-ion number densities assumes photochemical equilibrium for the short-lived ions, primary ion production by electron-impact ionization of N2 and CH4 and removal of the short-lived ions through reactions with CH4. It is shown that the models reasonably reproduce the observations, both with regard to n_P and the number densities of the short-lived ions. This is contrasted by the difficulties in accurately reproducing ion and electron number densities in Titan's sunlit ionosphere.
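
    The photochemical-equilibrium balance described above reduces to simple closed forms, sketched below with order-of-magnitude placeholder values rather than Cassini-derived quantities:

        import numpy as np

        # Plasma density: production balanced by dissociative recombination, n_P = sqrt(P/alpha).
        P_ion = 1.0      # ion-electron pair production rate, cm^-3 s^-1 (illustrative)
        alpha = 7e-7     # effective recombination coefficient, cm^3 s^-1 (illustrative)
        n_P = np.sqrt(P_ion / alpha)
        print(f"n_P ~ {n_P:.0f} cm^-3")

        # Short-lived ion: production by electron impact balanced by loss on CH4.
        P_short = 1e-2   # production rate, cm^-3 s^-1 (illustrative)
        k_CH4  = 1e-9    # rate coefficient for reaction with CH4, cm^3 s^-1 (illustrative)
        n_CH4  = 1e7     # CH4 number density, cm^-3 (illustrative)
        n_short = P_short / (k_CH4 * n_CH4)
        print(f"n(short-lived ion) ~ {n_short:.1f} cm^-3")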

  9. Quantum probability rule: a generalization of the theorems of Gleason and Busch

    NASA Astrophysics Data System (ADS)

    Barnett, Stephen M.; Cresser, James D.; Jeffers, John; Pegg, David T.

    2014-04-01

    Busch's theorem deriving the standard quantum probability rule can be regarded as a more general form of Gleason's theorem. Here we show that a further generalization is possible by reducing the number of quantum postulates used by Busch. We do not assume that the positive measurement outcome operators are effects or that they form a probability operator measure. We derive a more general probability rule from which the standard rule can be obtained from the normal laws of probability when there is no measurement outcome information available, without the need for further quantum postulates. Our general probability rule has prediction-retrodiction symmetry and we show how it may be applied in quantum communications and in retrodictive quantum theory.

  10. A Stochastic Model For Extracting Sediment Delivery Timescales From Sediment Budgets

    NASA Astrophysics Data System (ADS)

    Pizzuto, J. E.; Benthem, A.; Karwan, D. L.; Keeler, J. J.; Skalak, K.

    2015-12-01

    Watershed managers need to quantify sediment storage and delivery timescales to understand the time required for best management practices to improve downstream water quality. To address this need, we route sediment downstream using a random walk through a series of valley compartments spaced at 1 km intervals. The probability of storage within each compartment, q, is specified from a sediment budget and is defined as the ratio of the volume deposited to the annual sediment flux. Within each compartment, the probability of sediment moving directly downstream without being stored is p=1-q. If sediment is stored within a compartment, its "resting time" is specified by a stochastic exponential waiting time distribution with a mean of 10 years. After a particle's waiting time is over, it moves downstream to the next compartment by fluvial transport. Over a distance of "n" compartments, a sediment particle may be stored from 0 to n times with the probability of each outcome (store or not store) specified by the binomial distribution. We assign q = 0.02, a stream velocity of 0.5 m/s, an event "intermittency" of 0.01, and assume a balanced sediment budget. Travel time probability density functions have a steep peak at the shortest times, representing rapid transport in the channel of the fraction of sediment that moves downstream without being stored. However, the probability of moving downstream "n" km without storage is p^n (0.90 for 5 km, 0.36 for 50 km, 0.006 for 250 km), so travel times are increasingly dominated by storage with increasing distance. Median travel times for 5, 50, and 250 km are 0.03, 4.4, and 46.5 years. After a distance of approximately 2/q, or 100 km (2/0.02 = 100 compartments), the median travel time is determined by storage timescales, and active fluvial transport is irrelevant. Our model extracts travel time statistics from sediment budgets, and can be cast as a differential equation and solved numerically for more complex systems.
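
    A Monte Carlo sketch of the random-walk routing described above, using the stated parameter values (q = 0.02, 10-yr mean rest time, 0.5 m/s velocity, 0.01 intermittency); the implementation details are assumptions of this sketch, not the authors' code:

        import numpy as np

        rng = np.random.default_rng(1)

        q, mean_wait = 0.02, 10.0              # storage probability per 1-km compartment; mean rest time (yr)
        velocity, intermittency = 0.5, 0.01    # m/s; fraction of time transport is active
        sec_per_yr = 3.15e7

        def travel_times(distance_km, n_particles=20_000):
            """Monte Carlo travel times (years) for particles routed through 1-km compartments."""
            stops = rng.binomial(distance_km, q, size=n_particles)     # number of storage episodes
            resting = np.array([rng.exponential(mean_wait, k).sum() for k in stops])
            transport = distance_km * 1000.0 / (velocity * intermittency) / sec_per_yr
            return transport + resting

        for d in (5, 50, 250):
            print(f"{d} km: median travel time = {np.median(travel_times(d)):.2f} yr")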

  11. Conditional, Time-Dependent Probabilities for Segmented Type-A Faults in the WGCEP UCERF 2

    USGS Publications Warehouse

    Field, Edward H.; Gupta, Vipin

    2008-01-01

    This appendix presents elastic-rebound-theory (ERT) motivated time-dependent probabilities, conditioned on the date of last earthquake, for the segmented type-A fault models of the 2007 Working Group on California Earthquake Probabilities (WGCEP). These probabilities are included as one option in the WGCEP's Uniform California Earthquake Rupture Forecast 2 (UCERF 2), with the other options being time-independent Poisson probabilities and an 'Empirical' model based on observed seismicity rate changes. A more general discussion of the pros and cons of all methods for computing time-dependent probabilities, as well as the justification of those chosen for UCERF 2, are given in the main body of this report (and the 'Empirical' model is also discussed in Appendix M). What this appendix addresses is the computation of conditional, time-dependent probabilities when both single- and multi-segment ruptures are included in the model. Computing conditional probabilities is relatively straightforward when a fault is assumed to obey strict segmentation in the sense that no multi-segment ruptures occur (e.g., WGCEP (1988, 1990) or see Field (2007) for a review of all previous WGCEPs; from here we assume basic familiarity with conditional probability calculations). However, and as we'll see below, the calculation is not straightforward when multi-segment ruptures are included, in essence because we are attempting to apply a point-process model to a non point process. The next section gives a review and evaluation of the single- and multi-segment rupture probability-calculation methods used in the most recent statewide forecast for California (WGCEP UCERF 1; Petersen et al., 2007). We then present results for the methodology adopted here for UCERF 2. We finish with a discussion of issues and possible alternative approaches that could be explored and perhaps applied in the future. A fault-by-fault comparison of UCERF 2 probabilities with those of previous studies is given in the main part of this report.
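
    The core conditional-probability step can be sketched for a renewal model: P(event in the next ΔT years | T years elapsed since the last event) = [F(T + ΔT) − F(T)] / [1 − F(T)]. A lognormal recurrence distribution and illustrative segment values are assumed below (the UCERF 2 calculations themselves use a Brownian Passage Time model):

        from scipy import stats

        def conditional_prob(t_elapsed, dt, mean_ri, aperiodicity):
            """P(event in next dt years | no event in the t_elapsed years since the last one),
            for a lognormal renewal model (a sketch of the conditional-probability step only)."""
            # Lognormal with median set to the mean recurrence interval and shape set to the
            # aperiodicity -- both simplifying assumptions of this sketch.
            dist = stats.lognorm(s=aperiodicity, scale=mean_ri)
            F = dist.cdf
            return (F(t_elapsed + dt) - F(t_elapsed)) / (1.0 - F(t_elapsed))

        # Illustrative segment: 150-yr mean recurrence, 100 yr elapsed, 30-yr forecast window.
        print(conditional_prob(t_elapsed=100.0, dt=30.0, mean_ri=150.0, aperiodicity=0.5))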

  12. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Inferring ecological relationships from occupancy patterns for California Black Rails in the Sierra Nevada foothills

    NASA Astrophysics Data System (ADS)

    Richmond, Orien Manu Wright

    The secretive California Black Rail (Laterallus jamaicensis coturniculus ) has a disjunct and poorly understood distribution. After a new population was discovered in Yuba County in 1994, we conducted call playback surveys from 1994--2006 in the Sierra foothills and Sacramento Valley region to determine the distribution and residency of Black Rails, estimate densities, and obtain estimates of site occupancy and detection probability. We found Black Rails at 164 small, widely scattered marshes distributed along the lower western slopes of the Sierra Nevada foothills, from just northeast of Chico (Butte County) to Rocklin (Placer County). Marshes were surrounded by a matrix of unsuitable habitat, creating a patchy or metapopulation structure. We observed Black Rails nesting and present evidence that they are year-round residents. Assuming perfect detectability we estimated a lower-bound mean Black Rail density of 1.78 rails ha-1, and assuming a detection probability of 0.5 we estimated a mean density of 3.55 rails ha-1. We test if the presence of the larger Virginia Rail (Laterallus limicola) affects probabilities of detection or occupancy of the smaller California Black Rail in small freshwater marshes that range in size from 0.013-13.99 ha. We hypothesized that Black Rail occupancy should be lower in small marshes when Virginia Rails are present than when they are absent, because resources are presumably more limited and interference competition should increase. We found that Black Rail detection probability was unaffected by the detection of Virginia Rails, while, surprisingly, Black and Virginia Rail occupancy were positively associated even in small marshes. The average probability of Black Rail occupancy was higher when Virginia Rails were present (0.74 +/- 0.053) than when they were absent (0.36 +/- 0.069), and for both species occupancy increased with marsh size. We assessed the impact of winter (November-May) cattle grazing on occupancy of California Black Rails inhabiting a network of freshwater marshes in the northern Sierra Nevada foothills of California. As marsh birds are difficult to detect, we collected repeated presence/absence data via call playback surveys and used the "random changes in occupancy" parameterization of a multi-season occupancy model to examine relationships between occupancy and covariates, while accounting for detection probability. Wetland vegetation cover was significantly lower at winter-grazed sites than at ungrazed sites during the grazing season in 2007 but not in 2008. Winter grazing had little effect on Black Rail occupancy at irrigated marshes. However, at non-irrigated marshes fed by natural springs and streams, winter-grazed sites had lower occupancy than ungrazed sites, especially at larger marsh sizes (>0.5 ha). Black Rail occupancy was positively associated with marsh area, irrigation as a water source and summer cover, and negatively associated with isolation. We evaluate the performance of nine topographic features (aspect, downslope flow distance to streams, elevation, horizontal distance to sinks, horizontal distance to streams, plan curvature, profile curvature, slope and topographic wetness index) on freshwater wetland classification accuracy in the Sierra foothills of California. 
To evaluate object-based classification accuracy we test both within-image and between-image predictions using six different classification schemes (naive Bayes, the C4.5 decision tree classifier, k-nearest neighbors, boosted logistic regression, random forest, and a support vector machine classifier) in the classification software package Weka 3.6.2. Adding topographic features had mostly positive effects on classification accuracy for within-image tests, but mostly negative effects on accuracy for between-image tests. The topographic wetness index was the most beneficial topographic feature in both the within-image and between-image tests for distinguishing wetland objects from other "green" objects (irrigated pasture and woodland) and shadows. Our results suggest that there is a benefit to using a more complex index of topography than simple measures such as elevation for the goal of mapping small palustrine emergent wetlands, but this benefit, for the most part, has poor transferability when applied between image sections. (Abstract shortened by UMI.)

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Shih-Jung

    Dynamic strength of the High Flux Isotope Reactor (HFIR) vessel to resist hypothetical accidents is analyzed by using the method of fracture mechanics. Vessel critical stresses are estimated by applying dynamic pressure pulses of a range of magnitudes and pulse-durations. The pulses versus time functions are assumed to be step functions. The probability of vessel fracture is then calculated by assuming a distribution of possible surface cracks of different crack depths. The probability distribution function for the crack depths is based on the form that is recommended by the Marshall report. The toughness of the vessel steel used in the analysis is based on the projected and embrittled value after 10 effective full power years from 1986. From the study made by Cheverton, Merkle and Nanstad, the weakest point on the vessel for fracture evaluation is known to be located within the region surrounding the tangential beam tube HB3. The increase in the probability of fracture is obtained as an extension of the result from that report for the regular operating condition to include conditions of higher dynamic pressures due to accident loadings. The increase in the probability of vessel fracture is plotted for a range of hoop stresses to indicate the vessel strength against hypothetical accident conditions.

  15. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD)

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD) Manual v.1.2. The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Design of experiments for validating probability of detection capability of nondestructive evaluation (NDE) systems (DOEPOD) is a methodology that is implemented via software to serve as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolving test issues. DOEPOD relies on observed occurrences. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both hit-miss and signal amplitude testing. DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so that multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. DOEPOD applications for supporting inspector qualifications are included.
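
    For hit/miss data the 90/95 criterion can be checked with a one-sided binomial (Clopper-Pearson) lower bound, sketched below; this reproduces the familiar result that 29 hits in 29 trials demonstrates 90/95 POD, but it is only the binomial core, not the full DOEPOD diagnostic:

        from scipy import stats

        def pod_lower_bound(hits, trials, confidence=0.95):
            """One-sided Clopper-Pearson lower confidence bound on the probability of
            detection from hit/miss data (a standard binomial bound; DOEPOD itself adds
            grouping by flaw size and other diagnostics)."""
            if hits == 0:
                return 0.0
            return stats.beta.ppf(1.0 - confidence, hits, trials - hits + 1)

        # The classic demonstration: 29 hits in 29 trials just clears the 90/95 criterion.
        print(pod_lower_bound(29, 29))   # ~0.902 -> 95% confident that POD > 90%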

  16. Phase Transition for the Large-Dimensional Contact Process with Random Recovery Rates on Open Clusters

    NASA Astrophysics Data System (ADS)

    Xue, Xiaofeng

    2016-12-01

    In this paper we are concerned with the contact process with random recovery rates on open clusters of bond percolation on Z^d. Let ξ be a random variable such that P(ξ ≥ 1) = 1, which ensures E[1/ξ] < +∞; we then assign i.i.d. copies of ξ to the vertices as the random recovery rates. Assuming that each edge is open with probability p and the infection can only spread through the open edges, we obtain that limsup_{d→+∞} λ_d ≤ λ_c = 1/(p E[1/ξ]), where λ_d is the critical value of the process on Z^d, i.e., the maximum of the infection rates with which the infection dies out with probability one when only the origin is infected at t = 0. To prove the above main result, we show that the following phase transition occurs. Assuming that ⌈log d⌉ vertices are infected at t = 0, where these vertices can be located anywhere, then when the infection rate λ > λ_c the process survives with high probability as d → +∞, while when λ < λ_c the process dies out at time O(log d) with high probability.

  17. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  18. Viking Doppler noise used to determine the radial dependence of electron density in the extended corona

    NASA Technical Reports Server (NTRS)

    Berman, A. L.; Wackley, J. A.; Rockwell, S. T.; Kwan, M.

    1977-01-01

    The common form for radial dependence of electron density in the extended corona is given. By assuming proportionality between Doppler noise and integrated signal path electron density, Viking Doppler noise can be used to solve for a numerical value of X.

  19. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.

  20. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
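
    A compact sketch of the two-step recipe (spectral shaping of white Gaussian noise, then a memoryless inverse-CDF transform of the marginal); the power-law spectrum and gamma marginal are arbitrary illustrative choices, not the paper's examples:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n = 256

        # 1) Shape white Gaussian noise in the Fourier domain with the square root of the
        #    target power spectral density (an isotropic power law is assumed here).
        kx = np.fft.fftfreq(n)[:, None]
        ky = np.fft.fftfreq(n)[None, :]
        k = np.hypot(kx, ky)
        amp = np.zeros_like(k)
        amp[k > 0] = k[k > 0] ** -1.5          # sqrt of a k^-3 power spectrum, up to a constant
        colored = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * amp))

        # 2) Memoryless transform of the colored Gaussian field to the desired marginal
        #    (a gamma amplitude distribution here) via its inverse CDF.
        u = stats.norm.cdf((colored - colored.mean()) / colored.std())
        field = stats.gamma.ppf(u, a=2.0, scale=1.0)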

  1. A collision probability analysis of the double-heterogeneity problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hebert, A.

    1993-10-01

    A practical collision probability model is presented for the description of geometries with many levels of heterogeneity. Regular regions of the macrogeometry are assumed to contain a stochastic mixture of spherical grains or cylindrical tubes. Simple expressions for the collision probabilities in the global geometry are obtained as a function of the collision probabilities in the macro- and microgeometries. This model was successfully implemented in the collision probability kernel of the APOLLO-1, APOLLO-2, and DRAGON lattice codes for the description of a broad range of reactor physics problems. Resonance self-shielding and depletion calculations in the microgeometries are possible because each microregion is explicitly represented.

  2. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    ...approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a

  3. On the distinction between open and closed economies.

    PubMed Central

    Timberlake, W; Peden, B F

    1987-01-01

    Open and closed economies have been assumed to produce opposite relations between responding and the programmed density of reward (the amount of reward divided by its cost). Experimental procedures that are treated as open economies typically dissociate responding and total reward by providing supplemental income outside the experimental session; procedures construed as closed economies do not. In an open economy responding is assumed to be directly related to reward density, whereas in a closed economy responding is assumed to be inversely related to reward density. In contrast to this predicted correlation between response-reward relations and type of economy, behavior regulation theory predicts both direct and inverse relations in both open and closed economies. Specifically, responding should be a bitonic function of reward density regardless of the type of economy and is dependent only on the ratio of the schedule terms rather than on their absolute size. These predictions were tested by four experiments in which pigeons' key pecking produced food on fixed-ratio and variable-interval schedules over a range of reward magnitudes and under several open- and closed-economy procedures. The results better supported the behavior regulation view by showing a general bitonic function between key pecking and food density in all conditions. In most cases, the absolute size of the schedule requirement and the magnitude of reward had no effect; equal ratios of these terms produced approximately equal responding. PMID:3625103

  4. The effect of random matter density perturbations on the large mixing angle solution to the solar neutrino problem

    NASA Astrophysics Data System (ADS)

    Guzzo, M. M.; Holanda, P. C.; Reggiani, N.

    2003-08-01

    The neutrino energy spectrum observed in KamLAND is compatible with the predictions based on the Large Mixing Angle realization of the MSW (Mikheyev-Smirnov-Wolfenstein) mechanism, which provides the best solution to the solar neutrino anomaly. From the agreement between solar neutrino data and KamLAND observations, we can obtain the best fit values of the mixing angle and squared mass difference. When fitting the MSW predictions to the solar neutrino data, it is assumed that the solar matter does not have any kind of perturbations, that is, that the matter density decays monotonically from the center to the surface of the Sun. There are reasons to believe, nevertheless, that the solar matter density fluctuates around the equilibrium profile. In this work, we analysed the effect on the Large Mixing Angle parameters when the matter density fluctuates randomly around the equilibrium profile, solving the evolution equation in this case. We find that, in the presence of these density perturbations, the best fit values of the mixing angle and the squared mass difference assume smaller values, compared with the values obtained for the standard Large Mixing Angle solution without noise. Considering this effect of the random perturbations, the lowest island of the allowed region for KamLAND spectral data in the parameter space must be considered; we call it the very-low region.

  5. Liquefaction Hazard Maps for Three Earthquake Scenarios for the Communities of San Jose, Campbell, Cupertino, Los Altos, Los Gatos, Milpitas, Mountain View, Palo Alto, Santa Clara, Saratoga, and Sunnyvale, Northern Santa Clara County, California

    USGS Publications Warehouse

    Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.

    2008-01-01

    Maps showing the probability of surface manifestations of liquefaction in the northern Santa Clara Valley were prepared with liquefaction probability curves. The area includes the communities of San Jose, Campbell, Cupertino, Los Altos, Los Gatos, Milpitas, Mountain View, Palo Alto, Santa Clara, Saratoga, and Sunnyvale. The probability curves were based on complementary cumulative frequency distributions of the liquefaction potential index (LPI) for surficial geologic units in the study area. LPI values were computed with extensive cone penetration test soundings. Maps were developed for three earthquake scenarios: an M7.8 on the San Andreas Fault comparable to the 1906 event, an M6.7 on the Hayward Fault comparable to the 1868 event, and an M6.9 on the Calaveras Fault. Ground motions were estimated with the Boore and Atkinson (2008) attenuation relation. Liquefaction is predicted for all three events in young Holocene levee deposits along the major creeks. Liquefaction probabilities are highest for the M7.8 earthquake, ranging from 0.33 to 0.37 if a 1.5-m deep water table is assumed, and 0.10 to 0.14 if a 5-m deep water table is assumed. Liquefaction probabilities of the other surficial geologic units are less than 0.05. Probabilities for the scenario earthquakes are generally consistent with observations during historical earthquakes.

  6. Stand Density and Canopy Gaps

    Treesearch

    Boris Zeide

    2004-01-01

    Estimation of stand density is based on a relationship between number of trees and their average diameter in fully stocked stands. Popular measures of density (Reineke’s stand density index and basal area) assume that number of trees decreases as a power function of diameter. Actually, number of trees drops faster than predicted by the power function because the number...

  7. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  8. Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?

    USGS Publications Warehouse

    Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.

    2005-01-01

    In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
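
    A small simulation sketch of the quasi-extinction calculation for the self-limitation (Ricker) case, with invented parameter values; the published model additionally couples host dynamics to an explicit epidemic process:

        import numpy as np

        rng = np.random.default_rng(3)

        def quasi_extinction_prob(n0=100, r=0.3, K=100, sigma=0.3, threshold=20,
                                  years=50, n_reps=10_000):
            """Probability that a stochastic Ricker population falls below a
            quasi-extinction threshold within the time horizon (illustrative parameters)."""
            n = np.full(n_reps, float(n0))
            hit = np.zeros(n_reps, dtype=bool)
            for _ in range(years):
                eps = rng.normal(0.0, sigma, n_reps)          # environmental stochasticity
                n = n * np.exp(r * (1.0 - n / K) + eps)
                hit |= n < threshold
            return hit.mean()

        print(quasi_extinction_prob())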

  9. Solar wind conditions in the outer heliosphere and the distance to the termination shock

    NASA Technical Reports Server (NTRS)

    Belcher, John W.; Lazarus, Alan J.; Mcnutt, Ralph L., Jr.; Gordon, George S., Jr.

    1993-01-01

    The Plasma Science experiment on the Voyager 2 spacecraft has measured the properties of solar wind protons from 1 to 40.4 AU. We use these observations to discuss the probable location and motion of the termination shock of the solar wind. Assuming that the interstellar pressure is due to a 5 micro-G magnetic field draped over the upstream face of the heliopause, the radial variation of ram pressure implies that the termination shock will be located at an average distance near 89 AU. This distance scales inversely as the assumed field strength. There are also large variations in ram pressure on time scales of tens of days, due primarily to large variations in solar wind density at a given radius. Such rapid changes in the solar wind ram pressure can cause large perturbations in the location of the termination shock. We study the nonequilibrium location of the termination shock as it responds to these ram pressure changes. The results of this study suggest that the position of the termination shock can vary by as much as 10 AU in a single year, depending on the nature of variations in the ram pressure, and that multiple crossings of the termination shock by a given outer heliosphere spacecraft are likely. After the first crossing, such models of shock motion will be useful for predicting the timing of subsequent crossings.
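
    A worked scaling sketch rather than the authors' full calculation: anchoring the proportionality constant to the quoted 89 AU at an assumed 5 microgauss field, pressure balance between the r^-2 solar-wind ram pressure and the interstellar magnetic pressure implies that the shock distance scales inversely with the field strength and as the square root of the ram pressure.

        import numpy as np

        # Scaling sketch anchored to the abstract's numbers (89 AU at an
        # assumed 5 microgauss field); the normalization hides the detailed
        # shock treatment, so only the scalings are meaningful here.
        R_REF, B_REF = 89.0, 5.0          # AU, microgauss

        def shock_distance(B_uG, ram_pressure_ratio=1.0):
            """R scales as 1/B and as the square root of the ram pressure."""
            return R_REF * (B_REF / B_uG) * np.sqrt(ram_pressure_ratio)

        print(shock_distance(5.0))        # 89 AU baseline
        print(shock_distance(4.0))        # weaker assumed field -> more distant shock
        print(shock_distance(5.0, 0.7))   # a 30% drop in ram pressure pulls the shock inward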

  10. NuSTAR Observations of Water Megamaser AGN

    NASA Technical Reports Server (NTRS)

    Masini, A.; Comastri, A.; Balokvic, M.; Zaw, I.; Puccetti, S.; Ballantyne, D. R.; Bauer, F. E.; Boggs, S. E.; Brandt, W. N.; Zhang, William W.

    2016-01-01

    Aims. We study the connection between the masing disk and obscuring torus in Seyfert 2 galaxies. Methods. We present a uniform X-ray spectral analysis of the high energy properties of 14 nearby megamaser active galactic nuclei observed by NuSTAR. We use a simple analytical model to localize the maser disk and understand its connection with the torus by combining NuSTAR spectral parameters with the available physical quantities from VLBI mapping. Results. Most of the sources that we analyzed are heavily obscured, showing a column density in excess of approximately 10^23 cm^-2; in particular, 79% are Compton-thick [NH > 1.5 x 10^24 cm^-2]. When using column densities measured by NuSTAR with the assumption that the torus is the extension of the maser disk, and further assuming a reasonable density profile, we can predict the torus dimensions. They are found to be consistent with mid-IR interferometry parsec-scale observations of Circinus and NGC 1068. In this picture, the maser disk is intimately connected to the inner part of the torus. It is probably made of a large number of molecular clouds that connect the torus and the outer part of the accretion disk, giving rise to a thin disk rotating in most cases in Keplerian or sub-Keplerian motion. This toy model explains the established close connection between water megamaser emission and nuclear obscuration as a geometric effect.

  11. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
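
    A toy illustration of the density-versus-occurrence comparison (not the Zonation algorithm itself): cells are ranked either by predicted density or by predicted occurrence probability, and the fraction of all individuals captured inside the top-priority cells is compared. The synthetic landscape and the saturating occurrence model are assumptions of this sketch.

        import numpy as np

        rng = np.random.default_rng(2)
        n_cells = 10_000

        quality = rng.gamma(2.0, 1.0, n_cells)        # latent habitat quality
        density = rng.poisson(quality)                # individuals per cell
        occurrence = 1.0 - np.exp(-quality)           # P(occupied); saturates at high quality

        def individuals_protected(scores, top_frac):
            order = np.argsort(scores)[::-1]
            keep = order[: int(top_frac * n_cells)]
            return density[keep].sum() / density.sum()

        for frac in (0.1, 0.2, 0.4):
            print(f"top {int(frac * 100)}%: density-ranked "
                  f"{individuals_protected(density, frac):.2f}, "
                  f"occurrence-ranked {individuals_protected(occurrence, frac):.2f}")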

  12. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  13. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability densities for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-deutsche mark futures exchange, finding good agreement between theory and the observed data.
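
    A minimal Monte Carlo version of the continuous-time random walk: pausing times and jump magnitudes are drawn from two assumed densities (exponential and Laplace here, purely for illustration; the paper estimates these densities from the futures data), and the price change at a fixed horizon is the sum of the jumps arriving before it.

        import numpy as np

        rng = np.random.default_rng(3)

        def ctrw_sample(T=1.0, mean_wait=0.02, jump_scale=0.001, n_paths=20_000):
            out = np.empty(n_paths)
            for i in range(n_paths):
                t, x = 0.0, 0.0
                while True:
                    t += rng.exponential(mean_wait)      # pausing-time density (assumed exponential)
                    if t > T:
                        break
                    x += rng.laplace(0.0, jump_scale)    # jump-magnitude density (assumed Laplace)
                out[i] = x
            return out

        returns = ctrw_sample()
        excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3.0
        print("std of price change:", float(returns.std()), " excess kurtosis:", float(excess_kurtosis))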

  14. The Independent Effects of Phonotactic Probability and Neighbourhood Density on Lexical Acquisition by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Lee, Su-Yeon

    2011-01-01

    The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…

  15. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  16. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
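
    The sketch below uses the Wang-Landau flavour of multiplicative density-of-states updating on a small 1D Ising ring, where the exact density of states is known; the paper's self-consistent importance-sampling scheme differs in detail, so this is an illustration of the general idea only.

        import numpy as np
        from math import comb, log

        rng = np.random.default_rng(4)
        N = 12                                    # spins on a ring; exact ln g(E) = ln(2 C(N, k)), E = 2k - N
        spins = rng.choice([-1, 1], size=N)

        def energy(s):
            return int(-np.sum(s * np.roll(s, 1)))

        levels = list(range(-N, N + 1, 4))        # allowed energies of the ring
        ln_g = {E: 0.0 for E in levels}
        hist = {E: 0 for E in levels}
        ln_f = 1.0                                # multiplicative update factor, additive in log space
        E = energy(spins)

        while ln_f > 1e-3:
            for _ in range(10_000):
                i = int(rng.integers(N))
                dE = 2 * int(spins[i]) * int(spins[i - 1] + spins[(i + 1) % N])
                E_new = E + dE
                if log(rng.random()) < ln_g[E] - ln_g[E_new]:   # accept with min(1, g_old/g_new)
                    spins[i] *= -1
                    E = E_new
                ln_g[E] += ln_f
                hist[E] += 1
            counts = np.array(list(hist.values()))
            if counts.min() > 0.8 * counts.mean():              # histogram flat enough: refine the update factor
                hist = {key: 0 for key in hist}
                ln_f *= 0.5

        shift = ln_g[-N] - log(2.0)               # fix the arbitrary additive constant
        for kk in range(0, N + 1, 2):
            E_k = 2 * kk - N
            print(E_k, round(ln_g[E_k] - shift, 2), round(log(2 * comb(N, kk)), 2))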

  17. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    PubMed

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
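
    A short numerical illustration of why C = Np matters: two populations of identical true size, surveyed with different detection probabilities, yield raw counts that suggest a spurious decline unless they are adjusted by p. The numbers are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(5)
        N = 500
        for p in (0.6, 0.3):
            C = rng.binomial(N, p, size=5)
            print(f"detection p = {p}: raw counts {C}, adjusted C/p = {np.round(C / p, 1)} (true N = {N})")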

  18. Continued-fraction representation of the Kraus map for non-Markovian reservoir damping

    NASA Astrophysics Data System (ADS)

    van Wonderen, A. J.; Suttorp, L. G.

    2018-04-01

    Quantum dissipation is studied for a discrete system that linearly interacts with a reservoir of harmonic oscillators at thermal equilibrium. Initial correlations between system and reservoir are assumed to be absent. The dissipative dynamics as determined by the unitary evolution of system and reservoir is described by a Kraus map consisting of an infinite number of matrices. For all Laplace-transformed Kraus matrices exact solutions are constructed in terms of continued fractions that depend on the pair correlation functions of the reservoir. By performing factorizations in the Kraus map a perturbation theory is set up that conserves in arbitrary perturbative order both positivity and probability of the density matrix. The latter is determined by an integral equation for a bitemporal matrix and a finite hierarchy for Kraus matrices. In the lowest perturbative order this hierarchy reduces to one equation for one Kraus matrix. Its solution is given by a continued fraction of a much simpler structure as compared to the non-perturbative case. In the lowest perturbative order our non-Markovian evolution equations are applied to the damped Jaynes–Cummings model. From the solution for the atomic density matrix it is found that the atom may remain in the state of maximum entropy for a significant time span that depends on the initial energy of the radiation field.

  19. H I-to-H2 Transition Layers in the Star-forming Region W43

    NASA Astrophysics Data System (ADS)

    Bialy, Shmuel; Bihr, Simon; Beuther, Henrik; Henning, Thomas; Sternberg, Amiel

    2017-02-01

    The process of atomic-to-molecular (H I-to-H2) gas conversion is fundamental for molecular-cloud formation and star formation. 21 cm observations of the star-forming region W43 revealed extremely high H I column densities, of 120-180 M⊙ pc^-2, a factor of 10-20 larger than predicted by H I-to-H2 transition theories. We analyze the observed H I with a theoretical model of the H I-to-H2 transition, and show that the discrepancy between theory and observation cannot be explained by the intense radiation in W43, nor be explained by variations of the assumed volume density or H2 formation rate coefficient. We show that the large observed H I columns are naturally explained by several (9-22) H I-to-H2 transition layers, superimposed along the sightlines of W43. We discuss other possible interpretations such as a non-steady-state scenario and inefficient dust absorption. The case of W43 suggests that H I thresholds reported in extragalactic observations are probably not associated with a single H I-to-H2 transition, but are rather a result of several transition layers (clouds) along the sightlines, beam-diluted with diffuse intercloud gas.

  20. Modified Spectral Fatigue Methods for S-N Curves With MIL-HDBK-5J Coefficients

    NASA Technical Reports Server (NTRS)

    Irvine, Tom; Larsen, Curtis

    2016-01-01

    The rainflow method is used for counting fatigue cycles from a stress response time history, where the fatigue cycles are stress-reversals. The rainflow method allows the application of Palmgren-Miner's rule in order to assess the fatigue life of a structure subject to complex loading. The fatigue damage may also be calculated from a stress response power spectral density (PSD) using the semi-empirical Dirlik, Single Moment, Zhao-Baker and other spectral methods. These methods effectively assume that the PSD has a corresponding time history which is stationary with a normal distribution. This paper shows how the probability density function for rainflow stress cycles can be extracted from each of the spectral methods. This extraction allows for the application of the MIL-HDBK-5J fatigue coefficients in the cumulative damage summation. A numerical example is given in this paper for the stress response of a beam undergoing random base excitation, where the excitation is applied separately by a time history and by its corresponding PSD. The fatigue calculation is performed in the time domain, as well as in the frequency domain via the modified spectral methods. The result comparison shows that the modified spectral methods give comparable results to the time domain rainflow counting method.
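
    A hedged sketch of the damage calculation using the narrow-band approximation instead of Dirlik (to keep the formula short), with an assumed stress PSD and an assumed Basquin-type S-N curve N(S) = A*S^-b standing in for the MIL-HDBK-5J fit: the spectral moments give the RMS stress and zero up-crossing rate, the rainflow-amplitude PDF is Rayleigh under this assumption, and Miner's rule is integrated over it.

        import numpy as np

        def trap(y, x):
            # simple trapezoidal rule, independent of numpy version quirks
            return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

        # assumed stress PSD (MPa^2/Hz): a single resonant peak near 60 Hz
        f = np.linspace(1.0, 200.0, 2000)
        psd = 400.0 / (1.0 + ((f - 60.0) / 8.0) ** 2)

        m0 = trap(psd, f)                      # variance of the stress process
        m2 = trap(psd * f**2, f)
        nu0 = np.sqrt(m2 / m0)                 # zero up-crossing rate, Hz

        A, b = 1.0e15, 4.0                     # assumed Basquin coefficients, N(S) = A * S**-b
        S = np.linspace(1e-3, 10.0 * np.sqrt(m0), 5000)
        p_S = (S / m0) * np.exp(-S**2 / (2.0 * m0))   # narrow-band (Rayleigh) rainflow-amplitude PDF

        damage_per_second = nu0 * trap(p_S * S**b / A, S)   # Miner's rule
        print("expected fatigue life (hours):", 1.0 / damage_per_second / 3600.0)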

  1. On the distribution of career longevity and the evolution of home-run prowess in professional baseball

    NASA Astrophysics Data System (ADS)

    Petersen, Alexander M.; Jung, Woo-Sung; Stanley, H. Eugene

    2008-09-01

    Statistical analysis is a major aspect of baseball, from player averages to historical benchmarks and records. Much of baseball fanfare is based around players exceeding the norm, some in a single game and others over a long career. Career statistics serve as a metric for classifying players and establishing their historical legacy. However, the concept of records and benchmarks assumes that the level of competition in baseball is stationary in time. Here we show that power law probability density functions, a hallmark of many complex systems that are driven by competition, govern career longevity in baseball. We also find similar power laws in the density functions of all major performance metrics for pitchers and batters. The use of performance-enhancing drugs has a dark history, emerging as a problem for both amateur and professional sports. We find statistical evidence consistent with performance-enhancing drugs in the analysis of home runs hit by players in the last 25 years. This is corroborated by the findings of the Mitchell Report (2007), a two-year investigation into the use of illegal steroids in Major League Baseball, which recently revealed that over 5 percent of Major League Baseball players tested positive for performance-enhancing drugs in an anonymous 2003 survey.

  2. Active Brownian particles with velocity-alignment and active fluctuations

    NASA Astrophysics Data System (ADS)

    Großmann, R.; Schimansky-Geier, L.; Romanczuk, P.

    2012-07-01

    We consider a model of active Brownian particles (ABPs) with velocity alignment in two spatial dimensions with passive and active fluctuations. Here, active fluctuations refers to purely non-equilibrium stochastic forces correlated with the heading of an individual active particle. In the simplest case studied here, they are assumed to be independent stochastic forces parallel (speed noise) and perpendicular (angular noise) to the velocity of the particle. On the other hand, passive fluctuations are defined by a noise vector independent of the direction of motion of a particle, and may account, for example, for thermal fluctuations. We derive a macroscopic description of the ABP gas with velocity-alignment interaction. Here, we start from the individual-based description in terms of stochastic differential equations (Langevin equations) and derive equations of motion for the coarse-grained kinetic variables (density, velocity and temperature) via a moment expansion of the corresponding probability density function. We focus here on the different impact of active and passive fluctuations on onset of collective motion and show how active fluctuations in the active Brownian dynamics can change the phase-transition behaviour of the system. In particular, we show that active angular fluctuations lead to an earlier breakdown of collective motion and to the emergence of a new bistable regime in the mean-field case.
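
    A minimal Euler-Maruyama discretization in the spirit of the model described above: particles carry a speed and a heading, the heading relaxes toward the mean heading (a crude global stand-in for the local alignment interaction), and independent speed and angular noises play the role of the active fluctuations. Parameters and the alignment coupling are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(6)

        n, steps, dt = 400, 2000, 0.01
        v0, mu = 1.0, 2.0                 # preferred speed, alignment strength
        D_speed, D_angle = 0.05, 0.3      # active (speed and angular) noise intensities

        x = rng.uniform(0, 10, (n, 2))    # positions in a 10 x 10 periodic box
        theta = rng.uniform(-np.pi, np.pi, n)
        v = np.full(n, v0)

        for _ in range(steps):
            mean_heading = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
            torque = mu * np.sin(mean_heading - theta)
            theta += torque * dt + np.sqrt(2 * D_angle * dt) * rng.normal(size=n)
            v += (v0 - v) * dt + np.sqrt(2 * D_speed * dt) * rng.normal(size=n)
            x += (v[:, None] * np.column_stack((np.cos(theta), np.sin(theta)))) * dt
            x %= 10.0

        polar_order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
        print("polar order parameter:", round(float(polar_order), 3))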

  3. Laboratory Characterization of Gray Masonry Concrete

    DTIC Science & Technology

    2007-08-01

    Based on the appropriate values of posttest water content, wet density, and an assumed grain density of 2.61 Mg/m3, values of dry density, porosity...velocity measurements were performed on each specimen. The TXC tests exhibited a continuous increase in maximum principal stress difference with... (Figure 3 caption: Spring-arm lateral deformeter mounted on test...)

  4. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
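
    A back-of-envelope sketch of how replicate water samples suppress false negatives, assuming the per-sample detection probabilities quoted in the abstract apply independently to each replicate (an assumption of this sketch, not a result of the paper):

        # k independent replicate samples detect the species with probability 1 - (1 - p)**k
        for p in (0.18, 0.5, 0.99):
            for k in (1, 3, 5):
                print(f"per-sample p = {p:.2f}, {k} replicates -> P(detect) = {1 - (1 - p)**k:.3f}")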

  5. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  6. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  7. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August; whereas, the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.

  8. Generating probabilistic Boolean networks from a prescribed transition probability matrix.

    PubMed

    Ching, W-K; Chen, X; Tsing, N-K

    2009-11-01

    Probabilistic Boolean networks (PBNs) have received much attention in modeling genetic regulatory networks. A PBN can be regarded as a Markov chain process and is characterised by a transition probability matrix. In this study, the authors propose efficient algorithms for constructing a PBN when its transition probability matrix is given. The complexities of the algorithms are also analysed. This is an interesting inverse problem in network inference using steady-state data. The problem is important as most microarray data sets are assumed to be obtained from sampling the steady-state.

  9. 40 CFR 89.424 - Dilute emission sampling calculations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... emission level (HC, CO, CO2, PM, or NOX) in g/kW-hr. gi = Mass flow in grams per hour, = grams measured...= Hydrocarbon emissions, in grams per test mode. Density HC = Density of hydrocarbons is (0.5800 kg/m3) for #1... emissions, in grams per test mode. Density NO2 = Density of oxides of nitrogen is 1.913 kg/m3, assuming they...

  10. 40 CFR 89.424 - Dilute emission sampling calculations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... emission level (HC, CO, CO2, PM, or NOX) in g/kW-hr. gi = Mass flow in grams per hour, = grams measured...= Hydrocarbon emissions, in grams per test mode. Density HC = Density of hydrocarbons is (0.5800 kg/m3) for #1... emissions, in grams per test mode. Density NO2 = Density of oxides of nitrogen is 1.913 kg/m3, assuming they...

  11. 40 CFR 89.424 - Dilute emission sampling calculations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... emission level (HC, CO, CO2, PM, or NOX) in g/kW-hr. gi = Mass flow in grams per hour, = grams measured...= Hydrocarbon emissions, in grams per test mode. Density HC = Density of hydrocarbons is (0.5800 kg/m3) for #1... emissions, in grams per test mode. Density NO2 = Density of oxides of nitrogen is 1.913 kg/m3, assuming they...

  12. Time of arrival in quantum and Bohmian mechanics

    NASA Astrophysics Data System (ADS)

    Leavens, C. R.

    1998-08-01

    In a recent paper Grot, Rovelli, and Tate (GRT) [Phys. Rev. A 54, 4676 (1996)] derived an expression for the probability distribution π(T;X) of intrinsic arrival times T(X) at position x=X for a quantum particle with initial wave function ψ(x,t=0) freely evolving in one dimension. This was done by quantizing the classical expression for the time of arrival of a free particle at X, assuming a particular choice of operator ordering, and then regulating the resulting time of arrival operator. For the special case of a minimum-uncertainty-product wave packet at t=0 with average wave number k̄ and variance Δk, they showed that their analytical expression for π(T;X) agreed with the probability current density J(x=X,t=T) only to terms of order Δk/k̄. They dismissed the probability current density as a viable candidate for the exact arrival time distribution on the grounds that it can sometimes be negative. This fact is not a problem within Bohmian mechanics where the arrival time distribution for a particle, either free or in the presence of a potential, is rigorously given by |J(X,T)| (suitably normalized) [W. R. McKinnon and C. R. Leavens, Phys. Rev. A 51, 2748 (1995); C. R. Leavens, Phys. Lett. A 178, 27 (1993); M. Daumer et al., in On Three Levels: The Mathematical Physics of Micro-, Meso-, and Macro-Approaches to Physics, edited by M. Fannes et al. (Plenum, New York, 1994); M. Daumer, in Bohmian Mechanics and Quantum Theory: An Appraisal, edited by J. T. Cushing et al. (Kluwer Academic, Dordrecht, 1996)]. The two theories are compared in this paper and a case presented for which the results could not differ more: According to GRT's theory, every particle in the ensemble reaches a point x=X, where ψ(x,t) and J(x,t) are both zero for all t, while no particle ever reaches X according to the theory based on Bohmian mechanics. Some possible implications are discussed.
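
    A numerical sketch of the Bohmian-mechanics prescription, in units with hbar = m = 1: a free Gaussian packet is evolved exactly in k-space, the probability current J is evaluated at the detector position X, and |J(X,t)| (normalized) is read off as the arrival-time distribution. This is not GRT's quantized arrival-time operator, and the packet parameters are illustrative assumptions.

        import numpy as np

        # free Gaussian packet, hbar = m = 1; exact free evolution in k-space
        x = np.linspace(-40.0, 80.0, 4096)
        dx = x[1] - x[0]
        k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)

        sigma0, k0, X = 1.0, 3.0, 30.0          # initial width, mean wave number, detector position
        psi0 = (2.0 * np.pi * sigma0**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma0**2) + 1j * k0 * x)
        psi0_k = np.fft.fft(psi0)
        iX = np.argmin(np.abs(x - X))

        times = np.linspace(0.1, 20.0, 400)
        J_at_X = np.empty(times.size)
        for n, t in enumerate(times):
            psi = np.fft.ifft(psi0_k * np.exp(-1j * k**2 * t / 2.0))
            J = np.imag(np.conj(psi) * np.gradient(psi, dx))   # probability current density
            J_at_X[n] = J[iX]

        dt = times[1] - times[0]
        pi_T = np.abs(J_at_X)
        pi_T /= pi_T.sum() * dt                 # normalized arrival-time distribution from |J(X,t)|
        print("mean arrival time:", float((times * pi_T).sum() * dt), " vs classical X/k0 =", X / k0)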

  13. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  14. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  15. Calculation of induced current densities for humans by magnetic fields from electronic article surveillance devices

    NASA Astrophysics Data System (ADS)

    Gandhi, Om P.; Kang, Gang

    2001-11-01

    This paper illustrates the use of the impedance method to calculate the electric fields and current densities induced in millimetre resolution anatomic models of the human body, namely an adult and 10- and 5-year-old children, for exposure to nonuniform magnetic fields typical of two assumed but representative electronic article surveillance (EAS) devices at 1 and 30 kHz, respectively. The devices assumed for the calculations are a solenoid type magnetic deactivator used at store checkouts and a pass-by panel-type EAS system consisting of two overlapping rectangular current-carrying coils used at entry and exit from a store. The impedance method code is modified to obtain induced current densities averaged over a cross section of 1 cm2 perpendicular to the direction of induced currents. This is done to compare the peak current densities with the limits or the basic restrictions given in the ICNIRP safety guidelines. Because of the stronger magnetic fields at lower heights for both the assumed devices, the peak 1 cm2 area-averaged current densities for the CNS tissues such as the brain and the spinal cord are increasingly larger for smaller models and are the highest for the model of the 5-year-old child. For both the EAS devices, the maximum 1 cm2 area-averaged current densities for the brain of the model of the adult are lower than the ICNIRP safety guideline, but may approach or exceed the ICNIRP basic restrictions for models of 10- and 5-year-old children if sufficiently strong magnetic fields are used.

  16. Calculation of induced current densities for humans by magnetic fields from electronic article surveillance devices.

    PubMed

    Gandhi, O P; Kang, G

    2001-11-01

    This paper illustrates the use of the impedance method to calculate the electric fields and current densities induced in millimetre resolution anatomic models of the human body, namely an adult and 10- and 5-year-old children, for exposure to nonuniform magnetic fields typical of two assumed but representative electronic article surveillance (EAS) devices at 1 and 30 kHz, respectively. The devices assumed for the calculations are a solenoid type magnetic deactivator used at store checkouts and a pass-by panel-type EAS system consisting of two overlapping rectangular current-carrying coils used at entry and exit from a store. The impedance method code is modified to obtain induced current densities averaged over a cross section of 1 cm2 perpendicular to the direction of induced currents. This is done to compare the peak current densities with the limits or the basic restrictions given in the ICNIRP safety guidelines. Because of the stronger magnetic fields at lower heights for both the assumed devices, the peak 1 cm2 area-averaged current densities for the CNS tissues such as the brain and the spinal cord are increasingly larger for smaller models and are the highest for the model of the 5-year-old child. For both the EAS devices, the maximum 1 cm2 area-averaged current densities for the brain of the model of the adult are lower than the ICNIRP safety guideline, but may approach or exceed the ICNIRP basic restrictions for models of 10- and 5-year-old children if sufficiently strong magnetic fields are used.

  17. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141

  18. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
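
    A quick numerical check of the setup (not the free-probability derivation): two independent induced-measure mixed states are generated as partial traces of Haar-random bipartite pure states, the difference matrix is diagonalized, and its spectrum and trace distance are accumulated over realizations. The dimensions are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        n, m, reps = 8, 8, 500

        def random_mixed_state(n, m):
            # Gaussian matrix G: rho = G G^dagger / tr is the partial trace of a random bipartite pure state
            G = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
            rho = G @ G.conj().T
            return rho / np.trace(rho).real

        trace_dists, spectra = [], []
        for _ in range(reps):
            delta = random_mixed_state(n, m) - random_mixed_state(n, m)
            eig = np.linalg.eigvalsh(delta)
            spectra.append(eig)
            trace_dists.append(0.5 * np.abs(eig).sum())

        print("mean trace distance:", float(np.mean(trace_dists)))
        print("eigenvalues sum to ~0 in every realization:",
              np.allclose([s.sum() for s in spectra], 0.0, atol=1e-10))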

  19. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
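
    The paper supplies R code in its appendix; the sketch below re-creates the basic recipe in Python on synthetic depth data (an assumption of this sketch): fit several candidate probability density functions by maximum likelihood and compare them with AIC.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        depths = rng.gamma(shape=3.0, scale=0.25, size=300)   # synthetic depth-use data, metres

        candidates = {
            "gamma":     stats.gamma,
            "lognormal": stats.lognorm,
            "weibull":   stats.weibull_min,
        }

        for name, dist in candidates.items():
            params = dist.fit(depths, floc=0.0)               # MLE with the location pinned at 0
            loglik = np.sum(dist.logpdf(depths, *params))
            k = len(params) - 1                               # free parameters (loc is fixed)
            aic = 2 * k - 2 * loglik
            print(f"{name:10s} AIC = {aic:8.2f}")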

  20. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  1. Modeling Neutral Densities Downstream of a Gridded Ion Thruster

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2010-01-01

    The details of a model for determining the neutral density downstream of a gridded ion thruster are presented. An investigation of the possible sources of neutrals emanating from and surrounding a NEXT ion thruster determined that the most significant contributors to the downstream neutral density include discharge chamber neutrals escaping through the perforated grids, neutrals escaping from the neutralizer, and vacuum facility background neutrals. For the neutral flux through the grids, near- and far-field equations are presented for rigorously determining the neutral density downstream of a cylindrical aperture. These equations are integrated into a spherically-domed convex grid geometry with a hexagonal array of apertures for determining neutral densities downstream of the ion thruster grids. The neutrals escaping from an off-center neutralizer are also modeled assuming diffuse neutral emission from the neutralizer keeper orifice. Finally, the effect of the surrounding vacuum facility neutrals is included and assumed to be constant. The model is used to predict the neutral density downstream of a NEXT ion thruster with and without neutralizer flow and a vacuum facility background pressure. The impacts of past simplifying assumptions for predicting downstream neutral densities are also examined for a NEXT ion thruster.

  2. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented by lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave functions and energy levels of the Modified Poschl Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, the expectation values of quantum-mechanical operators can be calculated analytically in integral form or from the probability density graphs produced by the program.

  3. Partially coherent surface plasmon modes

    NASA Astrophysics Data System (ADS)

    Niconoff, G. M.; Vara, P. M.; Munoz-Lopez, J.; Juárez-Morales, J. C.; Carbajal-Dominguez, A.

    2011-04-01

    Elementary long-range plasmon modes are described assuming an exponential dependence of the refractive index in the neighbourhood of the dielectric-metal thin-film interface. The study is performed using coupled-mode theory. The interference between two long-range plasmon modes generated in this way allows the synthesis of sinusoidal surface plasmon modes, which can be considered completely coherent generalized plasmon modes. These sinusoidal plasmon modes are then used to synthesize new partially coherent surface plasmon modes, obtained as an incoherent superposition of sinusoidal plasmon modes in which the period of each mode is treated as a random variable. The surface modes generated in this way have an easily tuneable profile controlled by the probability density function associated with the period. We show that partially coherent plasmon modes have the remarkable property of allowing the propagation length to be controlled, a notable feature with respect to completely coherent surface plasmon modes. Numerical simulations for sinusoidal, Bessel, Gaussian and Dark Hollow plasmon modes are presented.

  4. Statistical Nature of Atomic Disorder in Irradiated Crystals.

    PubMed

    Boulle, A; Debelle, A

    2016-06-17

    Atomic disorder in irradiated materials is investigated by means of x-ray diffraction, using cubic SiC single crystals as a model material. It is shown that, besides the determination of depth-resolved strain and damage profiles, x-ray diffraction can be efficiently used to determine the probability density function (PDF) of the atomic displacements within the crystal. This task is achieved by analyzing the diffraction-order dependence of the damage profiles. We thereby demonstrate that atomic displacements undergo Lévy flights, with a displacement PDF exhibiting heavy tails [with a tail index in the γ=0.73-0.37 range, i.e., far from the commonly assumed Gaussian case (γ=2)]. It is further demonstrated that these heavy tails are crucial to account for the amorphization kinetics in SiC. From the retrieved displacement PDFs we introduce a dimensionless parameter f_{D}^{XRD} to quantify the disordering. f_{D}^{XRD} is found to be consistent with both independent measurements using ion channeling and with molecular dynamics calculations.

  5. The ROSAT Field Sources --- What are they?

    NASA Astrophysics Data System (ADS)

    Caillault, J.-P.; Briceno, C.; Martin, E. L.; Palla, F.; Wichmann, R.

    Recent studies using the ROSAT All-Sky Survey towards nearby star-forming regions have identified a widely dispersed population of X-ray active stars and have suggested that these objects are older PMS stars located far from molecular clouds. Another group, however, has presented a simple model assuming continuing star formation over the past 10^8 yrs that quantitatively reproduces the number, surface density, X-ray emission, and optical properties of the RASS sources, leading to the argument that these stars are not PMS stars, but young MS stars of ages up to approximately 10^8 yrs. A third party notes that the similarity between molecular cloud lifetimes and the ambipolar diffusion timescale implies that star formation does not take place instantaneously, nor at a constant rate. They thus argue that the probability of finding a large population of old stars in a star-forming region is intrinsically very small and that the post-T Tauri problem is by and large nonexistent.

  6. A mathematical characterization of vegetation effect on microwave remote sensing from the Earth

    NASA Technical Reports Server (NTRS)

    Choe, Y.; Tsang, L.

    1983-01-01

    In passive microwave remote sensing of the earth, a theoretical model that utilizes the radiative transfer equations was developed to account for the volume scattering effects of the vegetation canopy. Vegetation canopies such as alfalfa, sorghum, and corn are simulated by a layer of ellipsoidal scatterers and cylindrical structures. The ellipsoidal scatterers represent the leaves of vegetation and are randomly positioned and oriented. The orientation of ellipsoids is characterized by a probability density function of Eulerian angles of rotation. The cylindrical structures represent the stalks of vegetation and their radii are assumed to be much smaller than their lengths. The underlying soil is represented by a half-space medium with a homogeneous permittivity and uniform temperature profile. The radiative transfer equations are solved by a numerical method using a Gaussian quadrature formula to compute both the vertical and horizontal polarized brightness temperature as a function of observation angle. The theory was applied to the interpretation of experimental data obtained from sorghum covered fields near College Station, Texas.

  7. Some aspects of the cosmogonic outward migration of Neptune. Co-planar migration

    NASA Astrophysics Data System (ADS)

    Neslušan, L.; Jakubík, M.

    2013-10-01

    Considering a simple model of the cosmogonic outward migration of Neptune, we investigate whether assuming an extremely low orbital inclination for the small bodies in the once-existing proto-planetary disk could influence the structure of the reservoirs of objects in the trans-Neptunian region. We find no significant influence. Our models predict only the existence of the 2:3, 3:5, and 1:2 mean-motion resonances (MMRs) with Neptune and an anemic scattered disk (the 3:4, 5:7, and 9:11 MMRs are also indicated). To explain the classical Edgeworth-Kuiper belt, the relatively abundant 4:7 and 2:5 MMRs, and the more populous scattered disk, we need to assume, for example, that the outer boundary of the original proto-planetary disk considerably exceeded the distance of Neptune's current orbit (Neptune probably ended its migration at the distance where the disk's density became sub-critical), or that some Pluto-sized objects resided inside the MMRs and in the distant parts of the original proto-planetary disk.

  8. Mesoscopic fluctuations and intermittency in aging dynamics

    NASA Astrophysics Data System (ADS)

    Sibani, P.

    2006-01-01

    Mesoscopic aging systems are characterized by large intermittent noise fluctuations. In a record dynamics scenario (Sibani P. and Dall J., Europhys. Lett., 64 (2003) 8) these events, quakes, are treated as a Poisson process with average α ln(1 + t/tw), where t is the observation time, tw is the age and α is a parameter. Assuming for simplicity that quakes constitute the only source of de-correlation, we present a model for the probability density function (PDF) of the configuration autocorrelation function. Besides α, the model has the average quake size 1/q as a parameter. The model autocorrelation PDF has a Gumbel-like shape, which approaches a Gaussian for large t/tw and becomes sharply peaked in the thermodynamic limit. Its average and variance, which are given analytically, depend on t/tw as a power law and a power law with a logarithmic correction, respectively. Most predictions are in good agreement with data from the literature and with the simulations of the Edwards-Anderson spin-glass carried out as a test.
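
    A simulation sketch of the quoted scenario: the number of quakes in [tw, tw + t] is drawn as a Poisson variate with mean α ln(1 + t/tw), and each quake is assumed to reduce the configuration autocorrelation by a factor exp(-s) with s exponentially distributed with mean 1/q (this reading of the average quake size parameter is an assumption of the sketch, as are the parameter values).

        import numpy as np

        rng = np.random.default_rng(10)
        alpha, q = 4.0, 5.0                      # assumed parameters
        t_over_tw, reps = 1.0, 100_000

        n_quakes = rng.poisson(alpha * np.log1p(t_over_tw), size=reps)
        corr = np.array([np.exp(-rng.exponential(1.0 / q, size=k).sum()) for k in n_quakes])
        print("mean autocorrelation:", float(corr.mean()), " variance:", float(corr.var()))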

  9. Statistical Nature of Atomic Disorder in Irradiated Crystals

    NASA Astrophysics Data System (ADS)

    Boulle, A.; Debelle, A.

    2016-06-01

    Atomic disorder in irradiated materials is investigated by means of x-ray diffraction, using cubic SiC single crystals as a model material. It is shown that, besides the determination of depth-resolved strain and damage profiles, x-ray diffraction can be efficiently used to determine the probability density function (PDF) of the atomic displacements within the crystal. This task is achieved by analyzing the diffraction-order dependence of the damage profiles. We thereby demonstrate that atomic displacements undergo Lévy flights, with a displacement PDF exhibiting heavy tails [with a tail index in the γ = 0.73-0.37 range, i.e., far from the commonly assumed Gaussian case (γ = 2)]. It is further demonstrated that these heavy tails are crucial to account for the amorphization kinetics in SiC. From the retrieved displacement PDFs we introduce a dimensionless parameter f_D^XRD to quantify the disordering. f_D^XRD is found to be consistent with both independent measurements using ion channeling and with molecular dynamics calculations.

  10. Information entropy and dark energy evolution

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Luongo, Orlando

    Here, the information entropy is investigated in the context of early and late cosmology under the hypothesis that distinct phases of the universe's evolution are entangled with one another. The approach is based on the entangled-state ansatz, representing a coarse-grained definition of a primordial dark temperature associated with an effective entangled energy density. The dark temperature definition comes from assuming either the Von Neumann or the linear entropy as the source of cosmological thermodynamics. We interpret the information entropies involved in terms of the probabilities of forming structures during cosmic evolution. Following this recipe, we propose that the quantum entropy is simply associated with the thermodynamical entropy, and we investigate the consequences of our approach using the adiabatic sound speed. As byproducts, we analyze two phases of universe evolution: the late and early stages. We first recover that dark energy reduces to a pure cosmological constant as the zero-order entanglement contribution, and second that inflation is well described by means of an effective potential. In both cases, we infer numerical limits that are compatible with current observations.

  11. Linear Classifier with Reject Option for the Detection of Vocal Fold Paralysis and Vocal Fold Edema

    NASA Astrophysics Data System (ADS)

    Kotropoulos, Constantine; Arce, Gonzalo R.

    2009-12-01

    Two distinct two-class pattern recognition problems are studied, namely, the detection of male subjects who are diagnosed with vocal fold paralysis against male subjects who are diagnosed as normal and the detection of female subjects who are suffering from vocal fold edema against female subjects who do not suffer from any voice pathology. To do so, utterances of the sustained vowel "ah" are employed from the Massachusetts Eye and Ear Infirmary database of disordered speech. Linear prediction coefficients extracted from the aforementioned utterances are used as features. The receiver operating characteristic curve of the linear classifier, which stems from the Bayes classifier when Gaussian class-conditional probability density functions with equal covariance matrices are assumed, is derived. The optimal operating point of the linear classifier is specified with and without a reject option. First results using utterances of the "rainbow passage" are also reported for completeness. The reject option is shown to yield statistically significant improvements in the accuracy of detecting the voice pathologies under study.
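
    A minimal sketch of a linear (equal-covariance Gaussian) classifier with a posterior-probability reject band, on synthetic two-dimensional features standing in for the linear prediction coefficients; the class means, covariance, and reject thresholds are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(9)
        n = 400
        X0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], n)   # "normal" class
        X1 = rng.multivariate_normal([1.2, 1.0], [[1.0, 0.3], [0.3, 1.0]], n)   # "pathological" class
        X = np.vstack([X0, X1]); y = np.r_[np.zeros(n), np.ones(n)]

        mu0, mu1 = X0.mean(0), X1.mean(0)
        S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))                # pooled covariance
        w = np.linalg.solve(S, mu1 - mu0)                      # linear discriminant direction
        b = -0.5 * (mu0 + mu1) @ w                             # equal priors assumed
        posterior = 1.0 / (1.0 + np.exp(-(X @ w + b)))         # P(class 1 | x)

        for reject_band in (0.0, 0.15):
            keep = np.abs(posterior - 0.5) >= reject_band      # refer cases too close to the boundary
            acc = np.mean((posterior[keep] > 0.5) == y[keep].astype(bool))
            print(f"reject band +/-{reject_band:.2f}: accepted {keep.mean():.2f} of cases, accuracy {acc:.3f}")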

  12. On the constancy of the lunar cratering flux over the past 3.3 billion yr

    NASA Technical Reports Server (NTRS)

    Guinness, E. A.; Arvidson, R. E.

    1977-01-01

    Utilizing a method that minimizes random fluctuations in sampling crater populations, it can be shown that the ejecta deposit of Tycho, the floor of Copernicus, and the region surrounding the Apollo 12 landing site have incremental crater size-frequency distributions that can be expressed as log-log linear functions over the diameter range from 0.1 to 1 km. Slopes are indistinguishable for the three populations, probably indicating that the surfaces are dominated by primary craters. Treating the crater populations of Tycho, the floor of Copernicus, and Apollo 12 as primary crater populations contaminated, but not overwhelmed, with secondaries, allows an attempt at calibration of the post-heavy bombardment cratering flux. Using the age of Tycho as 109 m.y., Copernicus as 800 m.y., and Apollo 12 as 3.26 billion yr, there is no basis for assuming that the flux has changed over the past 3.3 billion yr. This result can be used for dating intermediate aged surfaces by crater density.

  13. Material dependence of 2H(d,p)3H cross section at the very low energies

    NASA Astrophysics Data System (ADS)

    Kılıç, Ali İhsan; Czerski, Konrad; Kuştan-Kılıç, Fadime; Targosz-Sleczka, Natalia; Weissbach, Daniel; Huke, Armin; Ruprecht, Götz

    2017-09-01

    Calculations of the material dependence of the 2H(d,p)3H cross section and of the neutron-to-proton branching ratio of d+d reactions have been performed, including the concept of a 0+ threshold single-particle resonance. The resonance has been assumed in order to explain the enhanced electron screening effect observed in the d+d reaction for different metallic targets. Here, we have included interference effects between the flat and resonant parts of the cross section, which allowed us to explain the observed suppression of the neutron channel in some metals such as Sr and Li. The position of the resonance depends on the screening energy, which in turn depends strongly on the local electron density. The resonance width observed for the d+d reactions in the very hygroscopic metals (Sr and Li), which are therefore probably contaminated by oxides, should thus be much larger than for other metals. The interference term of the cross section, which depends on the total resonance width, thereby provides the material dependence.

  14. Survival of Escherichia coli under lethal heat stress by L-form conversion.

    PubMed

    Markova, Nadya; Slavchev, Georgi; Michailova, Lilia; Jourdanova, Mimi

    2010-06-09

    Transition of bacteria to cell wall deficient L-forms in response to stress factors has been assumed to be a potential mechanism for survival of microbes under unfavorable conditions. In this article, we provide evidence of paradoxical survival through L-form conversion of a high-cell-density E. coli population after lethal treatments (boiling or autoclaving). Light and transmission electron microscopy demonstrated conversion from the classical rod shape to polymorphic L-form morphology and atypical growth of E. coli. Microcrystal formations observed at this stage were interpreted as being closely linked to the processes of L-form conversion and probably involved in the general phenomenon of protection against a lethal environment. The identity of the morphologically modified L-forms as E. coli was verified by a species-specific DNA-based test. Our study might contribute to a better understanding of the L-form phenomenon and its importance for bacterial survival, as well as provoke reexamination of the traditional view of killing strategies against bacteria.

  15. Survival of Escherichia coli under lethal heat stress by L-form conversion

    PubMed Central

    Markova, Nadya; Slavchev, Georgi; Michailova, Lilia; Jourdanova, Mimi

    2010-01-01

    Transition of bacteria to cell wall deficient L-forms in response to stress factors has been assumed to be a potential mechanism for survival of microbes under unfavorable conditions. In this article, we provide evidence of paradoxical survival through L-form conversion of a high-cell-density E. coli population after lethal treatments (boiling or autoclaving). Light and transmission electron microscopy demonstrated conversion from the classical rod shape to polymorphic L-form morphology and atypical growth of E. coli. Microcrystal formations observed at this stage were interpreted as being closely linked to the processes of L-form conversion and probably involved in the general phenomenon of protection against a lethal environment. The identity of the morphologically modified L-forms as E. coli was verified by a species-specific DNA-based test. Our study might contribute to a better understanding of the L-form phenomenon and its importance for bacterial survival, as well as provoke reexamination of the traditional view of killing strategies against bacteria. PMID:20582223

  16. Characterization of impulse noise and analysis of its effect upon correlation receivers

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Moore, J. D.

    1971-01-01

    A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the cross-correlation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that orthogonal signaling is not equivalent to on-off signaling with the same average energy.

  17. Modeling of Yb3+/Er3+-codoped microring resonators

    NASA Astrophysics Data System (ADS)

    Vallés, Juan A.; Gălătuş, Ramona

    2015-03-01

    The performance of a highly Yb3+/Er3+-codoped phosphate glass add-drop microring resonator is numerically analyzed. The model assumes resonant behaviour of both pump and signal powers, and the build-up of pump intensity inside the microring resonator and the signal transfer functions at the device's through and drop ports are evaluated. Detailed equations for the evolution of the rare-earth ion level population densities and the propagation of the optical powers inside the microring resonator are included in the model. Moreover, due to the high dopant concentrations considered, the microscopic statistical formalism based on the statistical average of the excitation probability of the Er3+ ion at a microscopic level has been used to describe energy-transfer inter-atomic mechanisms. Realistic parameters and working conditions are used for the calculations. Requirements to achieve amplification and laser oscillation within these devices are obtainable as a function of rare-earth ion concentration and coupling losses.

  18. A new subgrid-scale representation of hydrometeor fields using a multivariate PDF

    DOE PAGES

    Griffin, Brian M.; Larson, Vincent E.

    2016-06-03

    The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate probability density function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, hydrometeor fields were assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from large-eddy simulations (LESs) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. In conclusion, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.

  19. Herbivore-Specific, Density-Dependent Induction of Plant Volatiles: Honest or “Cry Wolf” Signals?

    PubMed Central

    Shiojiri, Kaori; Ozawa, Rika; Kugimiya, Soichi; Uefune, Masayoshi; van Wijk, Michiel; Sabelis, Maurice W.; Takabayashi, Junji

    2010-01-01

    Plants release volatile chemicals upon attack by herbivorous arthropods. They do so commonly in a dose-dependent manner: the more herbivores, the more volatiles released. The volatiles attract predatory arthropods and the amount determines the probability of predator response. We show that seedlings of a cabbage variety (Brassica oleracea var. capitata, cv Shikidori) also show such a response to the density of cabbage white (Pieris rapae) larvae and attract more (naive) parasitoids (Cotesia glomerata) when there are more herbivores on the plant. However, when attacked by diamondback moth (Plutella xylostella) larvae, seedlings of the same variety (cv Shikidori) release volatiles, the total amount of which is high and constant and thus independent of caterpillar density, and naive parasitoids (Cotesia vestalis) of diamondback moth larvae fail to discriminate herbivore-rich from herbivore-poor plants. In contrast, seedlings of another cabbage variety of B. oleracea (var. acephala: kale) respond in a dose-dependent manner to the density of diamondback moth larvae and attract more parasitoids when there are more herbivores. Assuming these responses of the cabbage cultivars reflect behaviour of at least some genotypes of wild plants, we provide arguments why the behaviour of kale (B. oleracea var acephala) is best interpreted as an honest signaling strategy and that of cabbage cv Shikidori (B. oleracea var capitata) as a “cry wolf” signaling strategy, implying a conflict of interest between the plant and the enemies of its herbivores: the plant profits from being visited by the herbivore's enemies, but the latter would be better off by visiting other plants with more herbivores. If so, evolutionary theory on alarm signaling predicts consequences of major interest to students of plant protection, tritrophic systems and communication alike. PMID:20808961

  20. MODELING THE ANOMALY OF SURFACE NUMBER DENSITIES OF GALAXIES ON THE GALACTIC EXTINCTION MAP DUE TO THEIR FIR EMISSION CONTAMINATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashiwagi, Toshiya; Suto, Yasushi; Taruya, Atsushi

    The most widely used Galactic extinction map is constructed assuming that the observed far-infrared (FIR) fluxes come entirely from Galactic dust. According to the earlier suggestion by Yahata et al., we consider how FIR emission of galaxies affects the SFD map. We first compute the surface number density of Sloan Digital Sky Survey (SDSS) DR7 galaxies as a function of the r-band extinction, A_{r,SFD}. We confirm that the surface densities of those galaxies positively correlate with A_{r,SFD} for A_{r,SFD} < 0.1, as first discovered by Yahata et al. for SDSS DR4 galaxies. Next we construct an analytical model to compute the surface density of galaxies, taking into account the contamination of their FIR emission. We adopt a log-normal probability distribution for the ratio of 100 μm and r-band luminosities of each galaxy, y ≡ (νL)_{100 μm}/(νL)_r. Then we search for the mean and rms values of y that fit the observed anomaly, using the analytical model. The required values to reproduce the anomaly are roughly consistent with those measured from the stacking analysis of SDSS galaxies. Due to the limitation of our statistical modeling, we are not yet able to remove the FIR contamination of galaxies from the extinction map. Nevertheless, the agreement with the model prediction suggests that the FIR emission of galaxies is mainly responsible for the observed anomaly. Whereas the corresponding systematic error in the Galactic extinction map is 0.1-1 mmag, it is directly correlated with galaxy clustering and thus needs to be carefully examined in precision cosmology.

  1. Nonlinear mixed effects modeling of gametocyte carriage in patients with uncomplicated malaria

    PubMed Central

    2010-01-01

    Background Gametocytes are the sexual form of the malaria parasite and the main agents of transmission. While there are several factors that influence host infectivity, the density of gametocytes appears to be the best single measure that is related to the human host's infectivity to mosquitoes. Despite the obviously important role that gametocytes play in the transmission of malaria and spread of anti-malarial resistance, it is common to estimate gametocyte carriage indirectly based on asexual parasite measurements. The objective of this research was to directly model observed gametocyte densities over time, during the primary infection. Methods Of 447 patients enrolled in sulphadoxine-pyrimethamine therapeutic efficacy studies in South Africa and Mozambique, a subset of 103 patients who had no gametocytes pre-treatment and who had at least three non-zero gametocyte densities over the 42-day follow up period were included in this analysis. Results A variety of different functions were examined. A modified version of the critical exponential function was selected for the final model given its robustness across different datasets and its flexibility in assuming a variety of different shapes. Age, site, initial asexual parasite density (logged to the base 10), and an empirical patient category were the co-variates that were found to improve the model. Conclusions A population nonlinear modeling approach seems promising and produced a flexible function whose estimates were stable across various different datasets. Surprisingly, dihydrofolate reductase and dihydropteroate synthetase mutation prevalence did not enter the model. This is probably related to a lack of power (quintuple mutations n = 12), and informative censoring; treatment failures were withdrawn from the study and given rescue treatment, usually prior to completion of follow up. PMID:20187935

  2. Nonlinear mixed effects modeling of gametocyte carriage in patients with uncomplicated malaria.

    PubMed

    Distiller, Greg B; Little, Francesca; Barnes, Karen I

    2010-02-26

    Gametocytes are the sexual form of the malaria parasite and the main agents of transmission. While there are several factors that influence host infectivity, the density of gametocytes appears to be the best single measure that is related to the human host's infectivity to mosquitoes. Despite the obviously important role that gametocytes play in the transmission of malaria and spread of anti-malarial resistance, it is common to estimate gametocyte carriage indirectly based on asexual parasite measurements. The objective of this research was to directly model observed gametocyte densities over time, during the primary infection. Of 447 patients enrolled in sulphadoxine-pyrimethamine therapeutic efficacy studies in South Africa and Mozambique, a subset of 103 patients who had no gametocytes pre-treatment and who had at least three non-zero gametocyte densities over the 42-day follow up period were included in this analysis. A variety of different functions were examined. A modified version of the critical exponential function was selected for the final model given its robustness across different datasets and its flexibility in assuming a variety of different shapes. Age, site, initial asexual parasite density (logged to the base 10), and an empirical patient category were the co-variates that were found to improve the model. A population nonlinear modeling approach seems promising and produced a flexible function whose estimates were stable across various different datasets. Surprisingly, dihydrofolate reductase and dihydropteroate synthetase mutation prevalence did not enter the model. This is probably related to a lack of power (quintuple mutations n = 12), and informative censoring; treatment failures were withdrawn from the study and given rescue treatment, usually prior to completion of follow up.
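
    The sketch below fits one plausible form of a critical exponential curve to hypothetical gametocyte densities; the paper's modified function, covariates, and mixed-effects structure are not reproduced here.

```python
# Sketch only: fitting one plausible form of a "critical exponential" curve,
# y(t) = (a + b*t) * exp(-c*t), to hypothetical gametocyte densities.
import numpy as np
from scipy.optimize import curve_fit

def critical_exponential(t, a, b, c):
    return (a + b * t) * np.exp(-c * t)

t = np.array([3, 7, 14, 21, 28, 42], dtype=float)        # follow-up days
y = np.array([5, 60, 140, 90, 40, 10], dtype=float)      # hypothetical gametocytes/uL

params, _ = curve_fit(critical_exponential, t, y, p0=(1.0, 20.0, 0.1))
print("a, b, c =", params)
```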

  3. A study of the region of massive star formation L379IRS1 in radio lines of methanol and other molecules

    NASA Astrophysics Data System (ADS)

    Kalenskii, S. V.; Shchurov, M. A.

    2016-04-01

    The results of spectral observations of the region of massive star formation L379IRS1 (IRAS18265-1517) are presented. The observations were carried out with the 30-m Pico Veleta radio telescope (Spain) at seven frequencies in the 1-mm, 2-mm, and 3-mm wavelength bands. Lines of 24 molecules were detected, from simple diatomic or triatomic species to complex eight- or nine-atom compounds such as CH3OCHO or CH3OCH3. Rotation diagrams constructed from methanol andmethyl cyanide lines were used to determine the temperature of the quiescent gas in this region, which is about 40-50 K. In addition to this warm gas, there is a hot component that is revealed through high-energy lines of methanol and methyl cyanide, molecular lines arising in hot regions, and the presence of H2O masers and Class II methanol masers at 6.7 GHz, which are also related to hot gas. One of the hot regions is probably a compact hot core, which is located near the southern submillimeter peak and is related to a group of methanol masers at 6.7 GHz. High-excitation lines at other positions may be associated with other hot cores or hot post-shock gas in the lobes of bipolar outflows. The rotation diagrams can be use to determine the column densities and abundances of methanol (10-9) and methyl cyanide (about 10-11) in the quiescent gas. The column densities of A- and E-methanol in L379IRS1 are essentually the same. The column densities of other observedmolecules were calculated assuming that the ratios of the molecular level abundances correspond to a temperature of 40 K. The molecular composition of the quiescent gas is close to that in another region of massive star formation, DR21(OH). The only appreciable difference is that the column density of SO2 in L379IRS1 is at least a factor of 20 lower than the value in DR21(OH). The SO2/CS and SO2/OCS abundance ratios, which can be used as chemical clocks, are lower in L379IRS1 than in DR21(OH), suggesting that L379IRS1 is probably younger than DR21(OH).

  4. The estimation of lower refractivity uncertainty from radar sea clutter using the Bayesian—MCMC method

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng

    2013-02-01

    The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to a global optimization algorithm, the Bayesian-MCMC approach can obtain not only approximate solutions but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data consist of the simulated data and refractivity profiles obtained from helicopter soundings. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and the helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
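
    A generic random-walk Metropolis sketch of the kind of sampler referred to above; the parabolic-equation forward model is replaced by a hypothetical linear surrogate, so this only illustrates how posterior distributions of refractivity-like parameters would be accumulated.

```python
# Generic random-walk Metropolis sketch (a stand-in for the paper's MCMC machinery):
# the electromagnetic forward model is replaced by a hypothetical linear surrogate.
import numpy as np

rng = np.random.default_rng(1)
obs = np.array([1.2, 0.8, 1.5])                        # hypothetical clutter observations

def forward(theta):                                    # placeholder forward model
    return np.array([theta[0] + theta[1], theta[0], theta[1]])

def log_post(theta, sigma=0.2):
    r = obs - forward(theta)
    return -0.5 * np.sum(r**2) / sigma**2              # flat prior assumed

theta = np.zeros(2)
chain = []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)        # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
chain = np.array(chain)[5000:]                         # discard burn-in
print("posterior mean:", chain.mean(axis=0))           # 1D/2D histograms of `chain` give the PPDs
```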

  5. A survey of implicit particle filters for data assimilation [Implicit particle filters for data assimilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chorin, Alexandre J.; Morzfeld, Matthias; Tu, Xuemin

    Implicit particle filters for data assimilation update the particles by first choosing probabilities and then looking for particle locations that assume them, guiding the particles one by one to the high probability domain. We provide a detailed description of these filters, with illustrative examples, together with new, more general, methods for solving the algebraic equations and with a new algorithm for parameter identification.

  6. Implications of seed banking for recruitment of Southern Appalachian woody species

    Treesearch

    Janneke Hille Ris Lambers; James S. Clark; Michael Lavine

    2002-01-01

    Seed dormancy is assumed to be unimportant for population dynamics of temperate woody species, because seeds occur at low densities and are short lived in forest soils. However, low soil seed densities may result from low seed production, and even modest seed longevity can buffer against fluctuating seed production, potentially limiting density-dependent mortality and...

  7. Laboratory Characterization of Talley Brick

    DTIC Science & Technology

    2011-08-01

    specimen's wet, bulk, or "as-tested" density. Results from these determinations are provided in Table 1. Measurements of posttest water content were made (ASTM 2005d). Based on the appropriate values of posttest water content, wet density, and an assumed grain density of 2.89 Mg/m3, values of dry density, porosity, and degree of saturation were computed. [Table 1 lists, for each test: posttest axial and radial P-wave and S-wave velocities, wet density, water content, dry density, porosity, and degree of saturation.]

  8. Applications of the Galton Watson process to human DNA evolution and demography

    NASA Astrophysics Data System (ADS)

    Neves, Armando G. M.; Moreira, Carlos H. C.

    2006-08-01

    We show that the problem of existence of a mitochondrial Eve can be understood as an application of the Galton-Watson process and presents interesting analogies with critical phenomena in Statistical Mechanics. In the approximation of small survival probability, and assuming limited progeny, we are able to find for a genealogic tree the maximum and minimum survival probabilities over all probability distributions for the number of children per woman constrained to a given mean. As a consequence, we can relate existence of a mitochondrial Eve to quantitative demographic data of early mankind. In particular, we show that a mitochondrial Eve may exist even in an exponentially growing population, provided that the mean number of children per woman Nbar is constrained to a small range depending on the probability p that a child is a female. Assuming that the value p≈0.488 valid nowadays has remained fixed for thousands of generations, the range where a mitochondrial Eve occurs with sizeable probability is 2.0492
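
    A minimal sketch of the underlying Galton-Watson calculation, assuming (purely for illustration) a Poisson number of daughters per woman with mean p·Nbar; the paper instead bounds survival over all offspring distributions with a given mean.

```python
# Sketch: extinction probability of a single maternal lineage under a Galton-Watson
# process, assuming (hypothetically) a Poisson number of daughters per woman
# with mean m = p * Nbar.
import numpy as np

def extinction_prob(m, iters=10000):
    q = 0.0
    for _ in range(iters):
        q = np.exp(m * (q - 1.0))      # fixed point of the Poisson generating function
    return q

p, Nbar = 0.488, 2.1                   # probability of a daughter, mean children per woman
q = extinction_prob(p * Nbar)
print("lineage survival probability:", 1 - q)
```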

  9. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to describe heterogeneous thematic classes that cannot be well fitted by a unimodal Wishart distribution. In order to make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.

  10. Entropy growth in emotional online dialogues

    NASA Astrophysics Data System (ADS)

    Sienkiewicz, J.; Skowron, M.; Paltoglou, G.; Hołyst, Janusz A.

    2013-02-01

    We analyze emotionally annotated massive data from IRC (Internet Relay Chat) and model the dialogues between its participants by assuming that the driving force for the discussion is the entropy growth of emotional probability distribution.
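
    A minimal sketch of the quantity being modelled: the Shannon entropy of the empirical emotion-label distribution over a growing dialogue window. The labels below are hypothetical.

```python
# Sketch: Shannon entropy of the empirical distribution over emotion labels
# (e.g. negative/neutral/positive) computed over a growing dialogue window.
import numpy as np

def entropy(labels, n_classes=3):
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

dialogue = np.array([1, 1, 1, 0, 1, 2, 0, 0, 2, 1, 2, 0])  # hypothetical emotion labels
growth = [entropy(dialogue[:k]) for k in range(1, len(dialogue) + 1)]
print(growth)   # the paper models discussion dynamics via the growth of this quantity
```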

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conn, A. R.; Parker, Q. A.; Zucker, D. B.

    In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey. Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.

  12. Behavioural flexibility in migratory behaviour in a long-lived large herbivore.

    PubMed

    Eggeman, Scott L; Hebblewhite, Mark; Bohm, Holger; Whittington, Jesse; Merrill, Evelyn H

    2016-05-01

    Migratory animals are predicted to enhance lifetime fitness by obtaining higher quality forage and/or reducing predation risk compared to non-migratory conspecifics. Despite evidence for behavioural flexibility in other taxa, previous research on large mammals has often assumed that migratory behaviour is a fixed behavioural trait. Migratory behaviour may be plastic for many species, although few studies have tested for individual-level flexibility using long-term monitoring of marked individuals, especially in large mammals such as ungulates. We tested variability in individual migratory behaviour using a 10-year telemetry data set of 223 adult female elk (Cervus elaphus) in the partially migratory Ya Ha Tinda population in Alberta, Canada. We used net squared displacement (NSD) to classify migratory strategy for each individual elk-year. Individuals switched between migrant and resident strategies at a mean rate of 15% per year, and migrants were more likely to switch than residents. We then tested how extrinsic (climate, elk/wolf abundance) and intrinsic (age) factors affected the probability of migrating, and, secondly, the decision to switch between migratory strategies. Over 630 individual elk-years, the probability of an individual elk migrating increased following a severe winter, in years of higher wolf abundance, and with increasing age. At an individual elk level, we observed 148 switching events of 430 possible transitions in elk monitored at least 2 years. We found switching was density-dependent, where migrants switched to a resident strategy at low elk abundance, but residents switched more to a migrant strategy at high elk abundance. Precipitation during the previous summer had a weak carryover effect, with migrants switching slightly more following wetter summers, whereas residents showed the opposite pattern. Older migrant elk rarely switched, whereas resident elk switched more frequently to migrate at older ages. Our results show migratory behaviour in ungulates is an individually variable trait that can respond to intrinsic, environmental and density-dependent forces. Different strategies had opposing responses to density-dependent and intrinsic drivers, providing a stabilizing mechanism for the maintenance of partial migration and demographic fitness in this population. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
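
    The sketch below computes net squared displacement (NSD) from a hypothetical telemetry track and makes a crude migrant/resident call from whether the NSD plateaus far from the starting range; the threshold is illustrative, whereas the study fits competing NSD models to classify each elk-year.

```python
# Sketch: net squared displacement (NSD) from a telemetry track and a crude
# migrant/resident call based on whether NSD plateaus far from the start.
import numpy as np

xy = np.array([[0, 0], [1, 2], [3, 5], [10, 18], [22, 30], [23, 31], [22, 29]],
              dtype=float)                       # hypothetical relocations (km)
nsd = np.sum((xy - xy[0])**2, axis=1)            # squared displacement from first fix

plateau = nsd[len(nsd) // 2:].mean()             # average NSD in the later relocations
strategy = "migrant" if plateau > 200.0 else "resident"   # hypothetical 200 km^2 cutoff
print(nsd, strategy)
```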

  13. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855

  14. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
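
    A minimal self-normalized importance sampling sketch of the basic estimator discussed above, with i.i.d. draws standing in for the Markov chain; the regenerative machinery for standard errors is not reproduced.

```python
# Minimal self-normalized importance sampling sketch: a sample from pi_1 (drawn
# i.i.d. here for simplicity, rather than via a Markov chain) is reweighted to
# estimate an expectation under a different density pi.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=100000)            # draws from pi_1 = N(0, 1)

def log_pi(x):   return -0.5 * (x - 1.0)**2      # target pi = N(1, 1), up to a constant
def log_pi1(x):  return -0.5 * x**2              # proposal pi_1 = N(0, 1), up to a constant

w = np.exp(log_pi(x) - log_pi1(x))               # unnormalized importance weights
estimate = np.sum(w * x**2) / np.sum(w)          # E_pi[X^2] (true value is 2)
print(estimate)
# With MCMC draws from pi_1 the same estimator remains consistent, but standard
# errors require regenerative (or batch-means) arguments, as discussed above.
```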

  15. Detection of the Earth with the SETI microwave observing system assumed to be operating out in the galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, J.; Tarter, J.

    1992-01-01

    This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.

  16. Detection of the Earth with the SETI microwave observing system assumed to be operating out in the galaxy.

    PubMed

    Billingham, J; Tarter, J

    1992-01-01

    This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.
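
    A back-of-envelope sketch of the one-way eavesdropping range implied by such a calculation; the transmitter power and receiver sensitivity below are illustrative placeholders, not the Arecibo, BMEWS, or MOP system parameters.

```python
# Back-of-envelope sketch of one-way "eavesdropping" range: a transmitter of
# equivalent isotropic radiated power EIRP is detectable when the flux it produces
# at distance R exceeds the receiver's sensitivity. Numbers are illustrative only.
import numpy as np

EIRP = 1e13          # W, order of magnitude of a powerful planetary radar (illustrative)
S_min = 1e-26        # W/m^2, hypothetical minimum detectable flux of the search system

R_max = np.sqrt(EIRP / (4.0 * np.pi * S_min))    # from flux = EIRP / (4 pi R^2)
print(R_max / 9.461e15, "light-years")           # convert metres to light-years
```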

  17. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
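
    As a hedged illustration of the maximum entropy method of moments discussed above, the sketch below recovers the density matching two specified moments by minimizing the convex dual over the Lagrange multipliers; the grid, moments, and optimizer choices are illustrative, not the paper's implementation.

```python
# Sketch: maximum entropy density matching the first two moments on a grid,
# p(x) ∝ exp(lambda_1*x + lambda_2*x^2). The multipliers minimize the convex dual
# log Z(lambda) - lambda·mu. (With two moments the answer is a Gaussian, which
# makes the result easy to check.)
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
features = np.vstack([x, x**2])            # phi_k(x) = x, x^2
mu = np.array([0.0, 1.0])                  # target moments: mean 0, second moment 1

def dual(lmbda):
    logp = features.T @ lmbda
    logZ = np.log(np.trapz(np.exp(logp), dx=dx))
    return logZ - lmbda @ mu

res = minimize(dual, x0=np.zeros(2))
p = np.exp(features.T @ res.x)
p /= np.trapz(p, dx=dx)                    # the recovered maximum entropy density
print(res.x)                               # expect roughly (0, -0.5), i.e. a standard Gaussian
```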

  18. Car accidents induced by a bottleneck

    NASA Astrophysics Data System (ADS)

    Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid

    2017-12-01

    Based on the Nagel-Schreckenberg (NS) model, we study the probability that car accidents occur (Pac) at the entrance of the merging part of two roads (i.e. a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, whereas it improves road safety at high densities. The phase diagram of the system is also constructed.
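
    A minimal single-lane Nagel-Schreckenberg update with a locally reduced speed limit standing in for the bottleneck; the two-road junction geometry and the accident-counting criterion of the study are not reproduced.

```python
# Minimal single-lane Nagel-Schreckenberg update on a ring (a simplified stand-in
# for the two-road junction model above); a locally reduced speed limit Vb mimics
# the bottleneck section.
import numpy as np

rng = np.random.default_rng(3)
L, vmax, p_slow, Vb = 100, 5, 0.3, 2
bottleneck = np.arange(40, 60)                     # hypothetical reduced-speed section

pos = np.sort(rng.choice(L, size=20, replace=False))
vel = np.zeros_like(pos)

def step(pos, vel):
    gaps = (np.roll(pos, -1) - pos - 1) % L        # distance to the car ahead
    limit = np.where(np.isin(pos, bottleneck), Vb, vmax)
    vel = np.minimum(vel + 1, limit)               # acceleration, capped by local limit
    vel = np.minimum(vel, gaps)                    # braking to avoid collision
    vel = np.where(rng.random(len(vel)) < p_slow, np.maximum(vel - 1, 0), vel)
    return (pos + vel) % L, vel

for _ in range(100):
    pos, vel = step(pos, vel)
print(pos, vel)
```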

  19. Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2005-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163

  20. Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2006-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596

  1. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wampler, William R.; Myers, Samuel M.; Modine, Normand A.

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.

  2. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  3. Spatial capture--recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
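
    A small sketch of the "ecological distance" idea: a least-cost path length on a resistance surface replaces Euclidean distance inside a half-normal encounter probability. The resistance values and parameters below are hypothetical, and the full SCR likelihood is not shown.

```python
# Sketch of ecological distance: least-cost path length between an activity centre
# and a trap on a resistance surface, plugged into a half-normal encounter model.
import numpy as np
import networkx as nx

resistance = np.array([[1, 1, 5, 5],
                       [1, 1, 5, 1],
                       [1, 1, 1, 1]], dtype=float)    # hypothetical cost surface

G = nx.grid_2d_graph(*resistance.shape)
for u, v in G.edges:
    G[u][v]["weight"] = 0.5 * (resistance[u] + resistance[v])   # mean cell cost per step

activity_centre, trap = (0, 0), (0, 3)
d_lcp = nx.dijkstra_path_length(G, activity_centre, trap, weight="weight")

p0, sigma = 0.6, 5.0                                  # baseline detection, movement scale
p_encounter = p0 * np.exp(-d_lcp**2 / (2 * sigma**2))
print(d_lcp, p_encounter)
```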

  4. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  5. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    NASA Astrophysics Data System (ADS)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation for a multivariate conditional probability distribution. Up to now, no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists. Therefore a wide range of approximate solutions, especially the statistical linearization, were developed. Using the Gaussian closure technique, which approximates the Kolmogorov equation by assuming a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively with a small number of iterations and without instabilities for specific parameter sets.

  6. A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys

    NASA Astrophysics Data System (ADS)

    Takeuchi, Tsutomu T.; Ishii, Takako T.

    2004-03-01

    Source confusion has been a long-standing problem in astronomy. In previous formulations of the confusion problem, sources are assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of the point field theory, we derive general analytic formulae for confusion problems with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. Then we estimate the PDF and confusion limits based on a realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number count or fluctuation statistics. We also comment on future submillimeter observations.

  7. Description of atomic burials in compact globular proteins by Fermi-Dirac probability distributions.

    PubMed

    Gomes, Antonio L C; de Rezende, Júlia R; Pereira de Araújo, Antônio F; Shakhnovich, Eugene I

    2007-02-01

    We perform a statistical analysis of atomic distributions as a function of the distance R from the molecular geometrical center in a nonredundant set of compact globular proteins. The number of atoms increases quadratically for small R, indicating a constant average density inside the core, reaches a maximum at a size-dependent distance R_max, and falls rapidly for larger R. The empirical curves turn out to be consistent with the volume increase of spherical concentric solid shells and a Fermi-Dirac distribution in which the distance R plays the role of an effective atomic energy ε(R) = R. The effective chemical potential μ governing the distribution increases with the number of residues, reflecting the size of the protein globule, while the temperature parameter β decreases. Interestingly, the product βμ is not as strongly dependent on protein size and appears to be tuned to maintain approximately half of the atoms in the high density interior and the other half in the exterior region of rapidly decreasing density. A normalized size-independent distribution was obtained for the atomic probability as a function of the reduced distance, r = R/R_g, where R_g is the radius of gyration. The global normalized Fermi distribution, F(r), can be reasonably decomposed in Fermi-like subdistributions for different atomic types τ, F_τ(r), with Σ_τ F_τ(r) = F(r), which depend on two additional parameters μ_τ and h_τ. The chemical potential μ_τ affects a scaling prefactor and depends on the overall frequency of the corresponding atomic type, while the maximum position of the subdistribution is determined by h_τ, which appears in a type-dependent atomic effective energy, ε_τ(r) = h_τ r, and is strongly correlated to available hydrophobicity scales. Better adjustments are obtained when the effective energy is not assumed to be necessarily linear, i.e. ε*_τ(r) = h*_τ r^(α_τ), in which case a correlation with hydrophobicity scales is found for the product α_τ h*_τ. These results indicate that compact globular proteins are consistent with a thermodynamic system governed by hydrophobic-like energy functions, with reduced distances from the geometrical center reflecting atomic burials, and provide a conceptual framework for the eventual prediction from sequence of a few parameters from which whole atomic probability distributions and potentials of mean force can be reconstructed. Copyright 2006 Wiley-Liss, Inc.
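
    A hedged sketch of fitting a Fermi-Dirac-shaped radial profile of the kind described above; the data and parameter values below are invented for illustration.

```python
# Sketch: fitting a Fermi-Dirac-like radial profile, F(r) = A / (exp(beta*(r - mu)) + 1),
# to hypothetical normalized atomic density values versus reduced distance r.
import numpy as np
from scipy.optimize import curve_fit

def fermi(r, A, beta, mu):
    return A / (np.exp(beta * (r - mu)) + 1.0)

r = np.linspace(0.1, 2.0, 20)
F_obs = fermi(r, 1.0, 6.0, 1.1) + 0.01 * np.random.default_rng(4).standard_normal(r.size)

params, _ = curve_fit(fermi, r, F_obs, p0=(1.0, 5.0, 1.0))
print("A, beta, mu =", params)   # beta*mu plays the role of the product discussed above
```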

  8. Multi-level biological responses in Ucides cordatus (Linnaeus, 1763) (Brachyura, Ucididae) as indicators of conservation status in mangrove areas from the western atlantic.

    PubMed

    Duarte, Luis Felipe de Almeida; Souza, Caroline Araújo de; Nobre, Caio Rodrigues; Pereira, Camilo Dias Seabra; Pinheiro, Marcelo Antonio Amaro

    2016-11-01

    There is a global lack of knowledge on tropical ecotoxicology, particularly in terms of mangrove areas. These areas often serve as nurseries or homes for several animal species, including Ucides cordatus (the uçá crab). This species is widely distributed, is part of the diet of human coastal communities, and is considered to be a sentinel species due to its sensitivity to toxic xenobiotics in natural environments. Sublethal damages to benthic populations reveal pre-pathological conditions, but discussions of the implications are scarce in the literature. In Brazil, the state of São Paulo offers an interesting scenario for ecotoxicology and population studies: it is easy to distinguish between mangroves that are well preserved and those which are significantly impacted by human activity. The objectives of this study were to provide the normal baseline values for the frequency of Micronucleated cells (MN‰) and for neutral red retention time (NRRT) in U. cordatus at pristine locations, as well to indicate the conservation status of different mangrove areas using a multi-level biological response approach in which these biomarkers and population indicators (condition factor and crab density) are applied in relation to environmental quality indicators (determined via information in the literature and solid waste volume). A mangrove area with no effects of impact (areas of reference or pristine areas) presented a mean value of MN‰<3 and NRRT>120min, values which were assumed as baseline values representing genetic and physiological normality. A significant correlation was found between NRRT and MN, with both showing similar and effective results for distinguishing between different mangrove areas according to conservation status. Furthermore, crab density was lower in more impacted mangrove areas, a finding which also reflects the effects of sublethal damage; this finding was not determined by condition factor measurements. Multi-level biological responses were able to reflect the conservation status of the mangrove areas studied using information on guideline values of MN‰, NRRT, and density of the uçá crab in order to categorize three levels of human impacts in mangrove areas: PNI (probable null impact); PLI (probable low impact); and PHI (probable high impact). Results confirm the success of U. cordatus species' multi-level biological responses in diagnosing threats to mangrove areas. Therefore, this species represents an effective tool in studies on mangrove conservation statuses in the Western Atlantic. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  10. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  11. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  12. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability of detection and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaw sizes and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
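    The 29-flaw rationale can be made concrete with a small numerical sketch. Assuming, as is conventional, that passing the demonstration requires detecting all 29 flaws independently, a true POD of 0.90 gives a pass probability of 0.90^29 ≈ 0.047, which is why 29 successes out of 29 demonstrates 90% POD at 95% confidence. The snippet below also evaluates the probability of passing the demonstration (PPD) for a hypothetical logistic POD curve; the curve, its parameters, and the flaw-size set are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pod(a, a50=1.0, slope=6.0):
    # Hypothetical logistic POD curve: detection probability vs. flaw size a.
    return 1.0 / (1.0 + np.exp(-slope * (a - a50)))

# Classic 29/29 point-estimate rationale: if the true POD were only 0.90,
# the chance of detecting all 29 flaws is below 5%.
print("P(29/29 | POD=0.90) =", 0.90 ** 29)   # ~0.047

# Probability of passing the demonstration (PPD) for a set of 29 flaw sizes,
# assuming a pass requires detecting every flaw (independent detections).
rng = np.random.default_rng(1)
flaw_sizes = rng.normal(loc=1.3, scale=0.1, size=29)   # sizes clustered above a50
ppd = np.prod(pod(flaw_sizes))
print("mean flaw size =", flaw_sizes.mean().round(3), " PPD =", ppd.round(3))
```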

  13. Dosimetric model for intraperitoneal targeted liposomal radioimmunotherapy of ovarian cancer micrometastases

    NASA Astrophysics Data System (ADS)

    Syme, A. M.; McQuarrie, S. A.; Middleton, J. W.; Fallone, B. G.

    2003-05-01

    A simple model has been developed to investigate the dosimetry of micrometastases in the peritoneal cavity during intraperitoneal targeted liposomal radioimmunotherapy. The model is applied to free-floating tumours with radii between 0.005 cm and 0.1 cm. Tumour dose is assumed to come from two sources: free liposomes in solution in the peritoneal cavity and liposomes bound to the surface of the micrometastases. It is assumed that liposomes do not penetrate beyond the surface of the tumours and that the total amount of surface antigen does not change over the course of treatment. Integrated tumour doses are expressed as a function of biological parameters that describe the rates at which liposomes bind to and unbind from the tumour surface, the rate at which liposomes escape from the peritoneal cavity and the tumour surface antigen density. Integrated doses are translated into time-dependent tumour control probabilities (TCPs). The results of the work are illustrated in the context of a therapy in which liposomes labelled with Re-188 are targeted at ovarian cancer cells that express the surface antigen CA-125. The time required to produce a TCP of 95% is used to investigate the importance of the various parameters. The relative contributions of surface-bound radioactivity and unbound radioactivity are used to assess the conditions required for a targeted approach to provide an improvement over a non-targeted approach during intraperitoneal radiation therapy. Using Re-188 as the radionuclide, the model suggests that, for microscopic tumours, the relative importance of the surface-bound radioactivity increases with tumour size. This is evidenced by the requirement for larger antigen densities on smaller tumours to effect an improvement in the time required to produce a TCP of 95%. This is because, for the smallest tumours considered, the unbound radioactivity is often capable of exerting a tumouricidal effect before the targeting agent has time to accumulate significantly on the tumour surface.

  14. Testing anthropic reasoning for the cosmological constant with a realistic galaxy formation model

    NASA Astrophysics Data System (ADS)

    Sudoh, Takahiro; Totani, Tomonori; Makiya, Ryu; Nagashima, Masahiro

    2017-01-01

    The anthropic principle is one of the possible explanations for the cosmological constant (Λ) problem. In previous studies, a dark halo mass threshold comparable with our Galaxy must be assumed in galaxy formation to get a reasonably large probability of finding the observed small value, P(<Λobs), though stars are found in much smaller galaxies as well. Here we examine the anthropic argument by using a semi-analytic model of cosmological galaxy formation, which can reproduce many observations such as galaxy luminosity functions. We calculate the probability distribution of Λ by running the model code for a wide range of Λ, while other cosmological parameters and model parameters for baryonic processes of galaxy formation are kept constant. Assuming that the prior probability distribution is flat per unit Λ, and that the number of observers is proportional to stellar mass, we find P(<Λobs) = 6.7 per cent without introducing any galaxy mass threshold. We also investigate the effect of metallicity; we find P(<Λobs) = 9.0 per cent if observers exist only in galaxies whose metallicity is higher than the solar abundance. If the number of observers is proportional to metallicity, we find P(<Λobs) = 9.7 per cent. Since these probabilities are not extremely small, we conclude that the anthropic argument is a viable explanation, if the value of Λ observed in our Universe is determined by a probability distribution.

  15. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  16. Fractional Brownian motion with a reflecting wall

    NASA Astrophysics Data System (ADS)

    Wada, Alexander H. O.; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ⟨x²⟩ ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
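    A minimal Monte Carlo sketch of this setup is given below, under stated assumptions: fractional Gaussian noise is generated exactly by a Cholesky factorization of its covariance (adequate for short walks), the wall at the origin is imposed by reflecting each step, and H = 0.75 corresponds to the superdiffusive case α = 2H = 1.5. This illustrates the simulation idea only; it is not the authors' code.

```python
import numpy as np

def fgn_cholesky(n_steps, hurst, n_walkers, rng):
    """Exact fractional Gaussian noise via Cholesky factorization of its covariance."""
    k = np.arange(n_steps)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    return L @ rng.standard_normal((n_steps, n_walkers))

rng = np.random.default_rng(0)
H, n_steps, n_walkers = 0.75, 512, 2000          # alpha = 2H = 1.5 (superdiffusive)
steps = fgn_cholesky(n_steps, H, n_walkers, rng)

x = np.zeros(n_walkers)
for t in range(n_steps):
    x = np.abs(x + steps[t])                      # reflecting wall at x = 0

# Probability density of the walker position near the wall at the final time.
hist, edges = np.histogram(x, bins=30, density=True)
print("density in the first few bins near the wall:", hist[:5].round(4))
print("mean-square displacement:", (x ** 2).mean().round(1))
```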

  17. Numerical study of the influence of surface reaction probabilities on reactive species in an rf atmospheric pressure plasma containing humidity

    NASA Astrophysics Data System (ADS)

    Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah

    2018-01-01

    The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.

  18. Effects of heterogeneous traffic with speed limit zone on the car accidents

    NASA Astrophysics Data System (ADS)

    Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-06-01

    Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). SLZ in heterogeneous traffic has an important effect, particularly in the mixed-velocity case. In the deterministic case, SLZ leads to the appearance of car accidents even at low densities; in this region, Pac increases with an increasing fraction of fast vehicles (Ff). In the nondeterministic case, SLZ decreases the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without SLZ (n = 0) is lower than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ in the road decreases the risk of collision in the congestion phase.

  19. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  20. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  1. Truncated Long-Range Percolation on Oriented Graphs

    NASA Astrophysics Data System (ADS)

    van Enter, A. C. D.; de Lima, B. N. B.; Valesin, D.

    2016-07-01

    We consider different problems within the general theme of long-range percolation on oriented graphs. Our aim is to settle the so-called truncation question, described as follows. We are given probabilities that certain long-range oriented bonds are open; assuming that the sum of these probabilities is infinite, we ask if the probability of percolation is positive when we truncate the graph, disallowing bonds of range above a possibly large but finite threshold. We give some conditions in which the answer is affirmative. We also translate some of our results on oriented percolation to the context of a long-range contact process.

  2. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
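    The detection-curve idea can be sketched as follows: simulated survey outcomes are generated with a known relationship between target density, search effort, and detection, and a logistic regression is then fitted to recover the detection curve and the implied false-negative probability. The data-generating model, parameter values, and variable names are assumptions for illustration, not the authors' field data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
density = rng.uniform(0.1, 5.0, n)          # targets per square metre (simulated)
effort = rng.uniform(1.0, 30.0, n)          # search time in minutes (simulated)

# "True" detection model used to simulate presence/absence survey outcomes.
logit = -3.0 + 0.8 * density + 0.12 * effort
p_true = 1.0 / (1.0 + np.exp(-logit))
detected = rng.binomial(1, p_true)

# Fit a logistic detection curve relating density and effort to detection.
X = np.column_stack([density, effort])
model = LogisticRegression().fit(X, detected)

# Estimated probability of detection (and of a false negative) for a low-density target.
p_hat = model.predict_proba([[0.5, 10.0]])[0, 1]
print(f"P(detect | density=0.5, effort=10) = {p_hat:.2f},  P(false negative) = {1 - p_hat:.2f}")
```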

  3. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  4. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
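    Spatial capture-recapture models of this kind typically relate detection to the distance between an animal's activity centre and each trap. The sketch below simulates camera-trap detections with a half-normal detection function p(d) = p0·exp(−d²/2σ²), using a baseline detection probability comparable to the 0.07 reported above; the trap layout, σ, and abundance are illustrative assumptions, and no Bayesian model fitting is shown.

```python
import numpy as np

rng = np.random.default_rng(2)
p0, sigma, n_occasions = 0.07, 1.5, 60          # baseline detection, spatial scale (km), occasions

# Hypothetical trap array (22 stations) and animal activity centres in a 10 x 10 km state space.
gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
traps = np.column_stack([gx.ravel(), gy.ravel()])[:22]
centres = rng.uniform(0, 10, size=(12, 2))      # 12 individuals in the 100 km^2 state space

# Half-normal detection probability for every individual-trap pair.
d = np.linalg.norm(centres[:, None, :] - traps[None, :, :], axis=2)
p = p0 * np.exp(-d ** 2 / (2 * sigma ** 2))

# Simulated capture histories: counts of detections per individual and trap.
captures = rng.binomial(n_occasions, p)
n_detected = (captures.sum(axis=1) > 0).sum()
print("individuals photo-captured at least once:", n_detected, "of", len(centres))
print("naive density (detected / area):", n_detected / 100, "per km^2")
```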

  5. Effects of stand density on top height estimation for ponderosa pine

    Treesearch

    Martin Ritchie; Jianwei Zhang; Todd Hamilton

    2012-01-01

    Site index, estimated as a function of dominant-tree height and age, is often used as an expression of site quality. This expression is assumed to be effectively independent of stand density. Observation of dominant height at two different ponderosa pine levels-of-growing-stock studies revealed that top height stability with respect to stand density depends on the...

  6. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2007-02-02

    ... glass. Pgha: probability of a person being in the glass hazard area. Phit: probability of hit; Phit(f): probability of hit for fatality; Phit(maj): probability of hit for major injury; Phit(min): probability of hit for minor injury. Pi: debris probability densities at the ES. PMaj(pair): individual ... combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury ...

  7. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
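    The adjustment described above, dividing the number of individuals sampled by an estimated capture probability, can be written in a few lines. In this sketch the per-pass capture probability and the catch are invented for illustration, and the cumulative capture probability over k passes is taken as 1 − (1 − p)^k, a standard simplification rather than the fitted regression model of the paper.

```python
# Abundance estimate obtained by dividing catch by cumulative capture probability.
p_single_pass = 0.45          # assumed per-pass capture probability for a size class
n_passes = 3
catch = 62                    # total smallmouth bass captured over all passes (hypothetical)

p_cumulative = 1 - (1 - p_single_pass) ** n_passes
n_hat = catch / p_cumulative
print(f"cumulative capture probability = {p_cumulative:.3f}")
print(f"estimated abundance = {n_hat:.1f} fish")
```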

  8. Phase transitions in Nowak Sznajd opinion dynamics

    NASA Astrophysics Data System (ADS)

    Wołoszyn, Maciej; Stauffer, Dietrich; Kułakowski, Krzysztof

    2007-05-01

    The Nowak modification of the Sznajd opinion dynamics model on the square lattice assumes that, with probability β, opinions flip from down to up due to mass-media advertising, and vice versa. Besides, with probability α the Sznajd rule applies: a neighbour pair agreeing in its two opinions convinces all six of its neighbours of that opinion. Our Monte Carlo simulations and mean-field theory find sharp phase transitions in the parameter space.
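    A minimal Monte Carlo sketch of this kind of dynamics is given below. It uses random sequential updates on a periodic square lattice: with probability α a randomly chosen horizontal pair that agrees imposes its opinion on its six surrounding neighbours, and with probability β a randomly chosen down opinion is flipped up by advertising. The lattice size, parameter values, and update scheduling are assumptions for illustration, not the authors' exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
L, alpha, beta, sweeps = 32, 0.9, 0.01, 200
spins = rng.choice([-1, 1], size=(L, L))            # -1 = "down", +1 = "up"

for _ in range(sweeps * L * L):
    i, j = rng.integers(L, size=2)
    if rng.random() < alpha:
        # Sznajd rule for the horizontal pair (i, j)-(i, j+1): if the pair agrees,
        # it convinces its six surrounding neighbours (periodic boundaries).
        if spins[i, j] == spins[i, (j + 1) % L]:
            s = spins[i, j]
            for di, dj in [(-1, 0), (1, 0), (0, -1), (-1, 1), (1, 1), (0, 2)]:
                spins[(i + di) % L, (j + dj) % L] = s
    if rng.random() < beta:
        # Mass-media advertising: flip one randomly chosen "down" opinion to "up".
        k, l = rng.integers(L, size=2)
        if spins[k, l] == -1:
            spins[k, l] = 1

print("final magnetisation per site:", spins.mean().round(3))
```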

  9. Redundant Sensors for Mobile Robot Navigation

    DTIC Science & Technology

    1985-09-01

    ... represent a probability that the area is empty, while positive numbers mean it is probably occupied. Zero represents the unknown. The basic idea is that ... room to give it absolute positioning information. This works by using two infrared emitters and detectors on the robot. Measurements of angles are made ... meters (T in Kelvin) 273 sec ... Distances returned when assuming 80 degrees Fahrenheit, but where the actual temperature is 60 degrees, will be seven inches ...

  10. New Directions in Software Quality Assurance Automation

    DTIC Science & Technology

    2009-06-01

    generation process. 4.1 Parameterized Safety Analysis. We can do a qualitative analysis as well and ask questions like "what has contributed to this ... the probability of interception p1 in the previous example, we can determine what impact those parameters have on the probability of hazardous ... assumed that the AEG is traversed top-down and left-to-right and only once to produce a particular event trace. Randomized decisions about what ...

  11. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result due to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
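    As a companion to the tool described above (which is implemented in R), the sketch below shows the same two estimation ideas in Python under stated assumptions: a kernel density estimate of landslide area in log space and a maximum likelihood fit of an Inverse Gamma model with scipy. The synthetic landslide areas are generated only for illustration, and a Double Pareto fit is omitted because scipy has no built-in Double Pareto distribution.

```python
import numpy as np
from scipy.stats import invgamma, gaussian_kde

rng = np.random.default_rng(0)
areas = invgamma.rvs(a=1.4, scale=800.0, size=300, random_state=rng)  # synthetic landslide areas (m^2)

# Non-parametric estimate: KDE of log10(area), a common choice for heavy-tailed size data.
kde = gaussian_kde(np.log10(areas))

# Parametric estimate: maximum likelihood fit of an Inverse Gamma model.
shape, loc, scale = invgamma.fit(areas, floc=0.0)
print(f"MLE Inverse Gamma: shape={shape:.2f}, scale={scale:.0f}")

# Compare the two probability densities at a few areas.
for a in (100.0, 1000.0, 10000.0):
    p_par = invgamma.pdf(a, shape, loc=loc, scale=scale)
    p_kde = kde(np.log10(a))[0] / (a * np.log(10))     # change of variables back to area
    print(f"area={a:>7.0f} m^2   invgamma pdf={p_par:.2e}   kde pdf={p_kde:.2e}")
```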

  12. Comparison of the deep inferior epigastric perforator flap and free transverse rectus abdominis myocutaneous flap in postmastectomy reconstruction: a cost-effectiveness analysis.

    PubMed

    Thoma, Achilleas; Veltri, Karen; Khuthaila, Dana; Rockwell, Gloria; Duku, Eric

    2004-05-01

    This study compared the deep inferior epigastric perforator (DIEP) flap and the free transverse rectus abdominis myocutaneous (TRAM) flap in postmastectomy reconstruction using a cost-effectiveness analysis. A decision analytic model was used. Medical costs associated with the two techniques were estimated from the Ontario Ministry of Health Schedule of Benefits for 2002. Hospital costs were obtained from St. Joseph's Healthcare, a university teaching hospital in Hamilton, Ontario, Canada. The utilities of clinically important health states related to breast reconstruction were obtained from 32 "experts" across Canada and converted into quality-adjusted life years. The probabilities of these various clinically important health states being associated with the DIEP and free TRAM flaps were obtained after a thorough review of the literature. The DIEP flap was more costly than the free TRAM flap ($7026.47 versus $6508.29), but it provided more quality-adjusted life years than the free TRAM flap (28.88 years versus 28.53 years). The baseline incremental cost-utility ratio was $1464.30 per quality-adjusted life year, favoring adoption of the DIEP flap. Sensitivity analyses were performed by assuming that the probabilities of occurrence of hernia, abdominal bulging, total flap loss, operating room time, and hospital stay were identical with the DIEP and free TRAM techniques. By assuming that the probability of postoperative hernia for the DIEP flap increased from 0.008 to 0.054 (same as for TRAM flap), the incremental cost-utility ratio changed to $1435.00 per quality-adjusted life year. A sensitivity analysis was performed for the complication of hernia because the DIEP flap allegedly diminishes this complication. Increasing the probability of abdominal bulge from 0.041 to 0.103 for the DIEP flap changed the ratio to $2731.78 per quality-adjusted life year. When the probability of total flap failure was increased from 0.014 to 0.016, the ratio changed to $1384.01 per quality-adjusted life year. When the time in the operating room was assumed to be the same for both flaps, the ratio changed to $4026.57 per quality-adjusted life year. If the hospital stay was assumed to be the same for both flaps, the ratio changed to $1944.30 per quality-adjusted life year. On the basis of the baseline calculation and sensitivity analyses, the DIEP flap remained a cost-effective procedure. Thus, adoption of this new technique for postmastectomy reconstruction is warranted in the Canadian health care system.
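    The baseline incremental cost-utility ratio quoted above is simply the cost difference divided by the difference in quality-adjusted life years; the short calculation below reproduces it from the rounded figures given in the abstract (the small discrepancy from $1464.30 reflects rounding of the published inputs).

```python
cost_diep, cost_tram = 7026.47, 6508.29     # Canadian dollars, from the abstract
qaly_diep, qaly_tram = 28.88, 28.53         # quality-adjusted life years, from the abstract

icur = (cost_diep - cost_tram) / (qaly_diep - qaly_tram)
print(f"incremental cost-utility ratio ~= ${icur:.2f} per QALY")   # ~ $1480, vs. $1464.30 reported
```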

  13. Evolution of Structure in the Intergalactic Medium and the Nature of the LY-Alpha Forest

    NASA Technical Reports Server (NTRS)

    Bi, Hongguang; Davidsen, Arthur F.

    1997-01-01

    We have performed a detailed statistical study of the evolution of structure in a photoionized intergalactic medium (IGM) using analytical simulations to extend the calculation into the mildly nonlinear density regime found to prevail at z = 3. Our work is based on a simple fundamental conjecture: that the probability distribution function of the density of baryonic diffuse matter in the universe is described by a lognormal (LN) random field. The LN distribution has several attractive features and follows plausibly from the assumption of initial linear Gaussian density and velocity fluctuations at arbitrarily early times. Starting with a suitably normalized power spectrum of primordial fluctuations in a universe dominated by cold dark matter (CDM), we compute the behavior of the baryonic matter, which moves slowly toward minima in the dark matter potential on scales larger than the Jeans length. We have computed two models that succeed in matching observations. One is a nonstandard CDM model with Ω = 1, h = 0.5, and Γ = 0.3, and the other is a low-density flat model with a cosmological constant (LCDM), with Ω = 0.4, Ω_Λ = 0.6, and h = 0.65. In both models, the variance of the density distribution function grows with time, reaching unity at about z = 4, where the simulation yields spectra that closely resemble the Ly-alpha forest absorption seen in the spectra of high-z quasars. The calculations also successfully predict the observed properties of the Ly-alpha forest clouds and their evolution from z = 4 down to at least z = 2, assuming a constant intensity for the metagalactic UV background over this redshift range. However, in our model the forest is not due to discrete clouds, but rather to fluctuations in a continuous intergalactic medium. At z = 3, typical clouds with measured neutral hydrogen column densities N_HI = 10^13.3, 10^13.5, and 10^11.5 per sq cm correspond to fluctuations with mean total densities approximately 10, 1, and 0.1 times the universal mean baryon density. Perhaps surprisingly, fluctuations whose amplitudes are less than or equal to the mean density still appear as "clouds" because in our model more than 70% of the volume of the IGM at z = 3 is filled with gas at densities below the mean value.

  14. Integrating resource selection information with spatial capture--recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  15. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    ERIC Educational Resources Information Center

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  16. The Effect of Phonotactic Probability and Neighbourhood Density on Pseudoword Learning in 6- and 7-Year-Old Children

    ERIC Educational Resources Information Center

    van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…

  17. Prediction Uncertainty and Groundwater Management: Approaches to get the Most out of Probabilistic Outputs

    NASA Astrophysics Data System (ADS)

    Peeters, L. J.; Mallants, D.; Turnadge, C.

    2017-12-01

    Groundwater impact assessments are increasingly being undertaken in a probabilistic framework whereby various sources of uncertainty (model parameters, model structure, boundary conditions, and calibration data) are taken into account. This has resulted in groundwater impact metrics being presented as probability density functions and/or cumulative distribution functions, spatial maps displaying isolines of percentile values for specific metrics, etc. Groundwater management, on the other hand, typically uses single values (i.e., in a deterministic framework) to evaluate what decisions are required to protect groundwater resources. For instance, in New South Wales, Australia, a nominal drawdown value of two metres is specified by the NSW Aquifer Interference Policy as a trigger-level threshold. In many cases, when drawdowns induced by groundwater extraction exceed two metres, "make-good" provisions are enacted (such as the surrendering of extraction licenses). The information obtained from a quantitative uncertainty analysis can be used to guide decision making in several ways. Two examples are discussed here: the first would not require modification of existing "deterministic" trigger or guideline values, whereas the second assumes that the regulatory criteria are also expressed in probabilistic terms. The first example is a straightforward interpretation of calculated percentile values for specific impact metrics. The second example goes a step further, as the previous deterministic thresholds do not currently allow for a probabilistic interpretation; e.g., there is no statement that "the probability of exceeding the threshold shall not be larger than 50%". It would indeed be sensible to have a set of thresholds with an associated acceptable probability of exceedance (or probability of not exceeding a threshold) that decreases as the impact increases. We here illustrate how both the prediction uncertainty and management rules can be expressed in a probabilistic framework, using groundwater metrics derived for a highly stressed groundwater system.
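    The probabilistic reading of a deterministic trigger can be made concrete with a short sketch: given an ensemble of predicted drawdowns from an uncertainty analysis, the probability of exceeding the 2 m threshold is the fraction of ensemble members above it, which can then be compared with an acceptable exceedance probability. The lognormal ensemble and the 50% acceptance level below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of predicted drawdowns (m) from a probabilistic model run.
drawdown = rng.lognormal(mean=0.2, sigma=0.6, size=10_000)

threshold_m = 2.0                       # deterministic trigger level (e.g., the NSW policy value)
acceptable_exceedance = 0.50            # assumed probabilistic criterion

p_exceed = (drawdown > threshold_m).mean()
print(f"P(drawdown > {threshold_m} m) = {p_exceed:.2f}")
print("criterion satisfied" if p_exceed <= acceptable_exceedance else "criterion exceeded")
print("95th percentile drawdown =", np.percentile(drawdown, 95).round(2), "m")
```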

  18. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
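    For reference, the density discussed above (with k degrees of freedom and noncentrality parameter λ) has the standard Poisson-mixture and Bessel-function representations shown below; the finite-sum representation proved in the paper is a further reduction of these standard forms.

```latex
f(x;k,\lambda)
  = \sum_{j=0}^{\infty} \frac{e^{-\lambda/2}\,(\lambda/2)^{j}}{j!}\, f_{\chi^{2}_{k+2j}}(x)
  = \frac{1}{2}\, e^{-(x+\lambda)/2} \left(\frac{x}{\lambda}\right)^{k/4-1/2}
    I_{k/2-1}\!\left(\sqrt{\lambda x}\right), \qquad x > 0 .
```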

  19. Semiclassical electron transport at the edge of a two-dimensional topological insulator: Interplay of protected and unprotected modes

    NASA Astrophysics Data System (ADS)

    Khalaf, E.; Skvortsov, M. A.; Ostrovsky, P. M.

    2016-03-01

    We study electron transport at the edge of a generic disordered two-dimensional topological insulator, where some channels are topologically protected from backscattering. Assuming the total number of channels is large, we consider the edge as a quasi-one-dimensional quantum wire and describe it in terms of a nonlinear sigma model with a topological term. Neglecting localization effects, we calculate the average distribution function of transmission probabilities as a function of the sample length. We mainly focus on the two experimentally relevant cases: a junction between two quantum Hall (QH) states with different filling factors (unitary class) and a relatively thick quantum well exhibiting quantum spin Hall (QSH) effect (symplectic class). In a QH sample, the presence of topologically protected modes leads to a strong suppression of diffusion in the other channels already at scales much shorter than the localization length. On the semiclassical level, this is accompanied by the formation of a gap in the spectrum of transmission probabilities close to unit transmission, thereby suppressing shot noise and conductance fluctuations. In the case of a QSH system, there is at most one topologically protected edge channel leading to weaker transport effects. In order to describe `topological' suppression of nearly perfect transparencies, we develop an exact mapping of the semiclassical limit of the one-dimensional sigma model onto a zero-dimensional sigma model of a different symmetry class, allowing us to identify the distribution of transmission probabilities with the average spectral density of a certain random-matrix ensemble. We extend our results to other symmetry classes with topologically protected edges in two dimensions.

  20. Seroprevalence of dengue in a rural and an urbanized village: A pilot study from rural western India.

    PubMed

    Shah, P S; Deoshatwar, A; Karad, S; Mhaske, S; Singh, A; Bachal, R V; Alagarasu, K; Padbidri, V S; Cecilia, D

    2017-01-01

    Dengue is highly prevalent in tropical and subtropical regions. The prevalence of dengue is influenced by a number of factors, i.e. host, vector, virus and environmental conditions including urbanization and population density. A cross-sectional study was undertaken to determine the seroprevalence of dengue in two selected villages that differed in their level of urbanization and population density. Two villages with demographically well-defined populations close to Pune, a metropolitan city of western India, were selected for the study. An age-stratified serosurvey was carried out during February to May 2011 in the two villages: a rural village A, located 6 km from the national highway with a population density of 159/km2, and an urbanized village B, located along the highway with a population density of 779/km2. Assuming a low seropositivity of 10%, 702 serum samples were collected from village A. The sample size for village B was calculated on the basis of the seropositivity obtained in village A, and 153 samples were collected. Serum samples were tested for the presence of dengue virus (DENV)-specific IgG. Simple proportional analyses were used to calculate and compare the seroprevalence. Of the 702 samples collected from village A, 42.8% were found positive for anti-DENV IgG. A significantly higher seropositivity for DENV (58.8%) was found in village B. In village A, there was an age-dependent increase in seroprevalence, whereas in village B there was a steep increase from 17% positivity in the 0-10 yr age group to 72% in the 11-20 yr age group. The seroprevalence was almost similar in the older age groups. The observations suggested that the prevalence of dengue is probably associated with urbanization and host population density. Areas that are in the process of urbanization need to be monitored for the prevalence of dengue and its vector, and appropriate vector control measures may be implemented.

  1. Modeling Percolation in Polymer Nanocomposites by Stochastic Microstructuring

    PubMed Central

    Soto, Matias; Esteva, Milton; Martínez-Romero, Oscar; Baez, Jesús; Elías-Zúñiga, Alex

    2015-01-01

    A methodology was developed for the prediction of the electrical properties of carbon nanotube-polymer nanocomposites via Monte Carlo computational simulations. A two-dimensional microstructure that takes into account waviness, fiber length and diameter distributions is used as a representative volume element. Fiber interactions in the microstructure are identified and then modeled as an equivalent electrical circuit, assuming one-third metallic and two-thirds semiconductor nanotubes. Tunneling paths in the microstructure are also modeled as electrical resistors, and crossing fibers are accounted for by assuming a contact resistance associated with them. The equivalent resistor network is then converted into a set of linear equations using nodal voltage analysis, which is then solved by means of the Gauss–Jordan elimination method. Nodal voltages are obtained for the microstructure, from which the percolation probability, equivalent resistance and conductivity are calculated. Percolation probability curves and electrical conductivity values are compared to those found in the literature. PMID:28793594
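    The nodal-analysis step described above can be illustrated on a toy resistor network. The sketch below assembles the conductance (Laplacian) matrix for a small graph of resistors, grounds one electrode, injects a unit current at the other, and solves the linear system for the nodal voltages; the network, resistance values, and node labels are illustrative assumptions rather than the paper's microstructure, and a standard linear solver stands in for Gauss-Jordan elimination.

```python
import numpy as np

# Toy resistor network: edges (node_a, node_b, resistance_ohms).
edges = [(0, 1, 10e3), (1, 2, 5e3), (0, 2, 20e3), (2, 3, 1e3), (1, 3, 15e3)]
n_nodes, source, ground = 4, 0, 3

# Assemble the conductance (Laplacian) matrix G for the nodal equations G v = i.
G = np.zeros((n_nodes, n_nodes))
for a, b, r in edges:
    g = 1.0 / r
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g

# Inject 1 A at the source node, ground the last node, and solve for nodal voltages.
i = np.zeros(n_nodes); i[source] = 1.0
keep = [k for k in range(n_nodes) if k != ground]
v = np.zeros(n_nodes)
v[keep] = np.linalg.solve(G[np.ix_(keep, keep)], i[keep])

r_equivalent = v[source] / 1.0           # equivalent resistance between source and ground
print("nodal voltages (V):", np.round(v, 1))
print(f"equivalent resistance source-ground: {r_equivalent:.1f} ohm")
```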

  2. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits the probability of propagation to be extrapolated either from partial data or purely theoretically. The propagation can be assumed to be frictional, which permits the propagation to be described, on average, by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be assumed to be a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse Gaussian, lognormal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are based either on theoretical considerations or on fits to data. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
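    The comparison of candidate stop-point distributions can be sketched as follows: synthetic runout distances are fitted by maximum likelihood with lognormal, inverse Gaussian, and exponential models from scipy, and the fits are ranked by AIC. The synthetic data and the fixed location parameters are assumptions for illustration, not the authors' inventories.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
runout = stats.lognorm.rvs(s=0.5, scale=60.0, size=250, random_state=rng)  # synthetic runout distances (m)

candidates = {"lognormal": stats.lognorm, "inverse Gaussian": stats.invgauss, "exponential": stats.expon}

for name, dist in candidates.items():
    params = dist.fit(runout, floc=0.0)               # maximum likelihood fit, location fixed at 0
    loglik = np.sum(dist.logpdf(runout, *params))
    k_free = len(params) - 1                          # loc was held fixed, so it is not a free parameter
    aic = 2 * k_free - 2 * loglik
    print(f"{name:>16s}: AIC = {aic:8.1f}")
```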

  3. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou and Pavon's result that a positive square matrix can be scaled uniquely to a column-stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove, using Brouwer's fixed point theorem, that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix. This result proves the Georgiou-Pavon conjecture, made in their recent paper, for two positive definite density matrices. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
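    The matrix half of this problem admits a compact numerical illustration. The sketch below uses a Sinkhorn-style alternating scaling: it first scales a positive matrix A so that its row sums equal q and its column sums equal p, and then column-normalizes, which yields a column-stochastic scaling of A that maps p to q. This is a generic illustration of the scaling idea under these assumptions, not the fixed-point construction used in the paper for quantum channels.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.uniform(0.5, 2.0, size=(n, n))          # arbitrary positive matrix
p = rng.dirichlet(np.ones(n))                   # given positive probability vectors
q = rng.dirichlet(np.ones(n))

# Sinkhorn iteration: diagonal scalings so that T has row sums q and column sums p.
T = A.copy()
for _ in range(500):
    T *= (q / T.sum(axis=1))[:, None]           # match row sums to q
    T *= (p / T.sum(axis=0))[None, :]           # match column sums to p

B = T / p[None, :]                              # column-stochastic scaling of A
print("columns of B sum to 1:", np.allclose(B.sum(axis=0), 1.0))
print("B @ p equals q:       ", np.allclose(B @ p, q, atol=1e-6))
```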

  4. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. The PDF at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.

  5. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of an important geotechnical parameter, the compression modulus Es, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates multi-precision data from different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions were built from the borehole experiments and the potential value at the prediction point; then, following numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point was calculated within the Bayesian inverse interpolation framework. The results were compared between Gaussian Sequential Stochastic Simulation and Bayesian methods. Differences between modeling single CPT samplings with a normal distribution and with the simulated probability density curve based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization within a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  6. Dispersal and individual quality in a long lived species

    USGS Publications Warehouse

    Cam, E.; Monnat, J.-Y.; Royle, J. Andrew

    2004-01-01

    The idea of differences in individual quality has been put forward in numerous long-term studies in long-lived species to explain differences in lifetime production among individuals. Despite the important role of individual heterogeneity in vital rates in demography, population dynamics and life history theory, the idea of 'individual quality' is elusive. It is sometimes assumed to be a static or dynamic individual characteristic. When considered as a dynamic trait, it is sometimes assumed to vary deterministically or stochastically, or to be confounded with the characteristics of the habitat. We addressed heterogeneity in reproductive performance among individuals established in higher-quality habitat in a long-lived seabird species. We used approaches to statistical inference based on individual random effects permitting quantification of heterogeneity in populations and assessment of individual variation from the population mean. We found evidence of heterogeneity in breeding probability, not success probability. We assessed the influence of dispersal on individual reproductive potential. Dispersal is likely to be destabilizing in species with high site and mate fidelity. We detected heterogeneity after dispersal, not before. Individuals may perform well regardless of quality before destabilization, including those that recruited in higher-quality habitat by chance, but only higher-quality individuals may be able to overcome the consequences of dispersal. Importantly, results differed when accounting for individual heterogeneity (an increase in mean breeding probability when individuals dispersed), or not (a decrease in mean breeding probability). In the latter case, the decrease in mean breeding probability may result from a substantial decrease in breeding probability in a few individuals and a slight increase in others. In other words, the pattern observed at the population mean level may not reflect what happens in the majority of individuals.

  7. Tree-average distances on certain phylogenetic networks have their weights uniquely determined.

    PubMed

    Willson, Stephen J

    2012-01-01

    A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
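    For a network with a single hybrid vertex, the tree-average distance is a probability-weighted mixture of the distances in the two displayed trees; the toy calculation below makes this explicit. The leaf-pair distances and the inheritance probability are invented for illustration.

```python
# Toy example: one hybrid vertex, so the network displays two trees T1 and T2.
# d_T1 and d_T2 are the path-length distances between two leaves in each displayed tree,
# and prob_T1 is the probability that a character is inherited along T1 at the hybrid.
d_T1, d_T2 = 7.0, 9.5          # sums of arc weights along the two leaf-to-leaf paths
prob_T1 = 0.3                  # inheritance probability at the hybrid vertex

tree_average_distance = prob_T1 * d_T1 + (1 - prob_T1) * d_T2
print("tree-average distance between the two leaves:", tree_average_distance)   # 8.75
```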

  8. Probabilistic seismic hazard in the San Francisco Bay area based on a simplified viscoelastic cycle model of fault interactions

    USGS Publications Warehouse

    Pollitz, F.F.; Schwartz, D.P.

    2008-01-01

    We construct a viscoelastic cycle model of plate boundary deformation that includes the effects of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of the stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, times of last earthquake (for prehistoric ruptures) and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.

  9. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
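    The two-step construction described above, Gaussian kernel density maps for individual data sets combined by a weighted linear sum, can be sketched generically as follows. The synthetic vent coordinates, bandwidths, and weights are placeholders, not the Somma-Vesuvio data sets or the elicited weights.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Two hypothetical data sets of past vent locations (km coordinates in a caldera frame).
vents_a = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(40, 2)).T   # e.g., eruptive fissures
vents_b = rng.normal(loc=[1.0, 0.5], scale=1.2, size=(25, 2)).T   # e.g., dated vent centres
weights = {"a": 0.6, "b": 0.4}                                     # assumed combination weights

# Gaussian kernel density map for each data set, evaluated on a grid and combined linearly.
xg, yg = np.meshgrid(np.linspace(-3, 4, 80), np.linspace(-3, 4, 80))
grid = np.vstack([xg.ravel(), yg.ravel()])
density = weights["a"] * gaussian_kde(vents_a)(grid) + weights["b"] * gaussian_kde(vents_b)(grid)

# Normalize to a probability map over the grid cells.
cell_area = (xg[0, 1] - xg[0, 0]) * (yg[1, 0] - yg[0, 0])
prob_map = density * cell_area / np.sum(density * cell_area)
imax = np.argmax(prob_map)
print("peak vent-opening probability cell at (x, y) =",
      (xg.ravel()[imax].round(2), yg.ravel()[imax].round(2)))
```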

  10. A KINETIC MODEL FOR CELL DENSITY DEPENDENT BACTERIAL TRANSPORT IN POROUS MEDIA

    EPA Science Inventory

    A kinetic transport model with the ability to account for variations in cell density of the aqueous and solid phases was developed for bacteria in porous media. Sorption kinetics in the advective-dispersive-sorptive equation was described by assuming that adsorption was proportio...

  11. Fractional Brownian motion with a reflecting wall.

    PubMed

    Wada, Alexander H O; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.

  12. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.

  13. Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.

    PubMed

    Costello, Fintan; Watts, Paul

    2018-01-01

    We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. This model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks. Copyright © 2018 Cognitive Science Society, Inc.
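
    The regressive effect of random noise on descriptive estimation can be illustrated with a toy simulation (a sketch of the general idea, not the authors' implementation): each remembered instance of an event is flipped with some noise probability d, so the expected estimate is (1 - 2d)p + d, pulled toward 0.5.

      import numpy as np

      rng = np.random.default_rng(42)

      def noisy_estimate(p_true, d, n_items=100, n_trials=20000):
          """Mean estimated probability when each recalled item is flipped with probability d."""
          items = rng.random((n_trials, n_items)) < p_true       # true occurrences
          flipped = rng.random((n_trials, n_items)) < d          # read/recall errors
          observed = items ^ flipped                             # noisy recollection
          return observed.mean()

      d = 0.1
      for p in (0.1, 0.3, 0.5, 0.7, 0.9):
          est = noisy_estimate(p, d)
          print(f"p = {p:.1f}  simulated estimate = {est:.3f}  theory (1-2d)p + d = {(1 - 2*d)*p + d:.3f}")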

  14. Chance-Constrained Missile-Procurement and Deployment Models for Naval Surface Warfare

    DTIC Science & Technology

    2005-03-01

    Missions in period II are assigned so that period-II scenarios are satisfied with a user-specified probability IIsp, which may depend on... feasible solution of RFFAM will successfully cover the period-II demands with probability IIsp if the MAP is followed (see Corollary 1 in Chapter... given set of scenarios D, and in particular any IIsp-feasible subset. Therefore, assume that the remainder vectors ŝjr are listed in non-increasing

  15. Cargo Throughput and Survivability Trade-Offs in Force Sustainment Operations

    DTIC Science & Technology

    2008-06-01

    more correlation with direct human activity. Mines are able to simply 'sit and wait,' thus allowing for easier mathematical and statistical... Since the ships will likely travel in groups along the same programmed GPS track, modeling several transitors to the identical path is assumed... setting of 1/2 was used for the actuation probability maximum. The 'threat profile' will give the probability that the nth transitor will hit a mine

  16. Repetitive pulses and laser-induced retinal injury thresholds

    NASA Astrophysics Data System (ADS)

    Lund, David J.

    2007-02-01

    Experimental studies with repetitively pulsed lasers show that the ED50, expressed as energy per pulse, varies as the inverse fourth power of the number of pulses in the exposure, relatively independently of the wavelength, pulse duration, or pulse repetition frequency of the laser. Models based on a thermal damage mechanism cannot readily explain this result. Menendez et al. proposed a probability-summation model for predicting the threshold for a train of pulses based on the probit statistics for a single pulse. The model assumed that each pulse is an independent trial, unaffected by any other pulse in the train of pulses, and that the probability of damage for a single pulse is adequately described by the logistic curve. The requirement that the effect of each pulse in the pulse train be unaffected by the effects of other pulses in the train is a showstopper when the end effect is viewed as a thermal effect with each pulse in the train contributing to the end temperature of the target tissue. There is evidence that the induction of cell death by microcavitation bubbles around melanin granules heated by incident laser irradiation can satisfy the condition of pulse independence required by the probability-summation model. This paper will summarize the experimental data and discuss the relevance of the probability-summation model given microcavitation as a damage mechanism.
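
    The probability-summation idea is compactly expressed as P_N = 1 - (1 - P_1)^N, with the single-pulse dose-response P_1 described by a logistic (probit-like) curve. The sketch below uses illustrative parameters only, not data from the paper, and shows how the per-pulse ED50 drops as the number of pulses grows.

      import numpy as np
      from scipy.optimize import brentq

      def p_single(dose, ed50=1.0, slope=8.0):
          """Logistic single-pulse damage probability as a function of per-pulse dose."""
          return 1.0 / (1.0 + np.exp(-slope * (np.log(dose) - np.log(ed50))))

      def p_train(dose, n_pulses):
          """Probability-summation model: each pulse is an independent trial."""
          return 1.0 - (1.0 - p_single(dose)) ** n_pulses

      for n in (1, 10, 100, 1000):
          # Per-pulse dose at which the whole train reaches 50% damage probability.
          ed50_n = brentq(lambda d: p_train(d, n) - 0.5, 1e-6, 10.0)
          print(f"N = {n:5d}  per-pulse ED50 = {ed50_n:.3f} (relative to single-pulse ED50)")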

  17. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  18. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal... algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We

  19. Wavefronts, actions and caustics determined by the probability density of an Airy beam

    NASA Astrophysics Data System (ADS)

    Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón

    2018-07-01

    The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maximum of the probability density of an Airy beam determines a Hamiltonian system.

  20. Fluctuation of densities of bacteriophages and Escherichia coli present in natural biofilms and water of a main channel and a small tributary.

    PubMed

    Hirotani, Hiroshi; Yu, Ma; Yamada, Takeshi

    2013-01-01

    Fluctuations of bacteriophage and Escherichia coli densities in naturally developed riverbed biofilms were investigated over a 1-year period. E. coli ranged from 1,500 to 15,500 most probable number (MPN)/100 mL and from 580 to 18,500 MPN/cm² in the main channel in the river water and biofilms, respectively. However, the fluctuations were much greater in the tributary, ranging from 0.8 to 100 MPN/100 mL and from 0.3 to 185 MPN/cm² in water and biofilms, respectively. The fluctuations of coliphages were also greater in the tributary than in the main channel. FRNA phage serotyping results indicated no significant differences in the source type of the fecal contamination in the main channel and tributary sampling stations. Significant correlations between phage groups in biofilms and water were found at both the main channel and the tributary. It was assumed that natural biofilms developed in the streambed captured and retained somatic phages in the biofilms for a certain period of time in the main channel site. At the location receiving constant and heavy contamination, the usage of phage indicators may provide additional information on the presence of viruses. In the small tributary it may be possible to estimate the virus concentration by monitoring the E. coli indicator.

  1. Mechanical Model for Dynamic Behavior of Concrete Under Impact Loading

    NASA Astrophysics Data System (ADS)

    Sun, Yuanxiang

    Concrete is a geo-material used extensively in civil construction and military protective structures. A coupled damage-plasticity model describing the complex behavior of concrete subjected to impact loading is proposed in this work. The concrete is assumed to be a homogeneous continuum with pre-existing micro-cracks and micro-voids. Damage to concrete is caused by micro-crack nucleation, growth and coalescence, and is defined as the probability of fracture at a given crack density; it induces a decrease in the strength and stiffness of the concrete. Compaction of concrete is physically a collapse of the material voids; it produces plastic strain in the concrete and, at the same time, an increase of the bulk modulus. In the crack growth model, micro-cracks are activated and begin to propagate gradually; when the crack density reaches a critical value, the concrete undergoes catastrophic fragmentation. The model parameters for mortar are determined using plate impact experiments under a uniaxial strain state. Comparison with the test results shows that the proposed model gives consistent predictions of the impact behavior of concrete. The proposed model may be used in the design and analysis of concrete structures under impact and shock loading. This work is supported by the State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology (YBKT14-02).

  2. Double Bounce Component in Cross-Polarimetric SAR from a New Scattering Target Decomposition

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Hoon; Wdowinski, Shimon

    2013-08-01

    Common vegetation scattering theories assume that the Synthetic Aperture Radar (SAR) cross-polarization (cross-pol) signal represents solely volume scattering. We found this assumption incorrect based on SAR phase measurements acquired over the south Florida Everglades wetlands indicating that the cross-pol radar signal often samples the water surface beneath the vegetation. Based on these new observations, we propose that the cross-pol measurement consists of both volume scattering and double bounce components. The simplest multi-bounce scattering mechanism that generates a cross-pol signal occurs by rotated dihedrals. Thus, we use the rotated dihedral mechanism with a probability density function to revise some of the vegetation scattering theories and develop a three-component decomposition algorithm with single bounce, double bounce from both co-pol and cross-pol, and volume scattering components. We applied the new decomposition analysis to both urban and rural environments using Radarsat-2 quad-pol datasets. The decomposition of San Francisco's urban area shows higher double bounce scattering and reduced volume scattering compared to other common three-component decompositions. The decomposition of the rural Everglades area shows that the relations between volume and cross-pol double bounce depend on the vegetation density. The new decomposition can be useful for better understanding vegetation scattering behavior over various surfaces and for estimating above-ground biomass using SAR observations.

  3. Fischer-Tropsch synthesis in near-critical n-hexane: Pressure-tuning effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bochniak, D.J.; Subramaniam, B.

    For Fe-catalyzed Fischer-Tropsch (FT) synthesis with near-critical n-hexane (P_c = 29.7 bar; T_c = 233.7 C) as the reaction medium, isothermal pressure tuning from 1.2-2.4 P_c (for n-hexane) at the reaction temperature (240 C) significantly changes syngas conversion and product selectivity. For fixed feed rates of syngas (H_2/CO = 0.5; 50 std. cm^3/g catalyst) and n-hexane (1 mL/min), syngas conversion attains a steady state at all pressures, increasing roughly threefold in this pressure range. Effective rate constants, estimated assuming a first-order dependence of syngas conversion on hydrogen, reveal that the catalyst effectiveness increases with pressure, implying the alleviation of pore-diffusion limitations. Pore accessibilities increase at higher pressures because the extraction of heavier hydrocarbons from the catalyst pores is enhanced by the liquid-like densities, yet better-than-liquid transport properties, of n-hexane. This explanation is consistent with the single-alpha (= 0.78) Anderson-Schulz-Flory product distribution, the constant chain-termination probability, and the higher primary product (1-olefin) selectivities (~80%) observed at the higher pressures. Results indicate that the pressure tunability of the density and transport properties of near-critical reaction media offers a powerful tool to optimize catalyst activity and product selectivity during FT reactions on supported catalysts.

  4. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    NASA Astrophysics Data System (ADS)

    Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.

    2018-03-01

    Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code (Grandgirard et al 2016 Comput. Phys. Commun. 207 35). A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime that is probably relevant for tungsten, the standard expression for the neoclassical impurity flux is shown to be recovered from gyrokinetics with the employed collision operator. Purely neoclassical simulations of deuterium plasma with trace impurities of helium, carbon and tungsten lead to impurity diffusion coefficients, inward pinch velocities due to density peaking, and thermo-diffusion terms which quantitatively agree with neoclassical predictions and NEO simulations (Belli et al 2012 Plasma Phys. Control. Fusion 54 015015). The thermal screening factor appears to be less than predicted analytically in the Pfirsch-Schlüter regime, which can be detrimental to fusion performance. Finally, self-consistent nonlinear simulations have revealed that the tungsten impurity flux is not the sum of turbulent and neoclassical fluxes computed separately, as is usually assumed. The synergy partly results from the turbulence-driven in-out poloidal asymmetry of tungsten density. This result suggests the need for self-consistent simulations of impurity transport, i.e. including both turbulence and neoclassical physics, in view of quantitative predictions for ITER.

  5. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.

  6. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques are presented to derive several statistical wind models based on the properties of the multivariate normal probability function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the wind for the bivariate normal distribution. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional distribution of wind component shears given a wind component is normal. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
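
    The key distributional fact can be verified numerically. The sketch below is illustrative and assumes zero-mean, equal-variance, uncorrelated components; it samples bivariate normal wind components and checks that the resulting wind speed follows a Rayleigh distribution.

      import numpy as np
      from scipy.stats import rayleigh, kstest

      rng = np.random.default_rng(7)
      sigma = 5.0                               # m/s, common std dev of u and v (assumed)

      u = rng.normal(0.0, sigma, 100_000)       # zonal component
      v = rng.normal(0.0, sigma, 100_000)       # meridional component
      speed = np.hypot(u, v)                    # wind speed

      # For zero-mean, equal-variance, independent components the speed is Rayleigh(sigma).
      stat, pvalue = kstest(speed, rayleigh(scale=sigma).cdf)
      print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.3f}")
      print(f"mean speed: simulated {speed.mean():.2f}, Rayleigh theory {sigma*np.sqrt(np.pi/2):.2f}")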

  7. Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.

    PubMed

    Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian

    2011-01-01

    Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event tree analysis (ETA) and fault tree analysis (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
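
    A minimal sketch of how fuzzy likelihoods can propagate through a fault tree: triangular fuzzy probabilities are represented by alpha-cut intervals and combined with interval arithmetic for AND/OR gates. This illustrates the general fuzzy-arithmetic idea only; it is not the authors' dependency-coefficient formulation, and the basic-event probabilities are hypothetical.

      import numpy as np

      def alpha_cut(tri, alpha):
          """Interval of a triangular fuzzy number (low, mode, high) at membership level alpha."""
          low, mode, high = tri
          return np.array([low + alpha * (mode - low), high - alpha * (high - mode)])

      def and_gate(intervals):
          """AND gate assuming independence: product of interval endpoints."""
          out = np.array([1.0, 1.0])
          for iv in intervals:
              out = out * iv
          return out

      def or_gate(intervals):
          """OR gate assuming independence: 1 - prod(1 - p), keeping endpoints ordered."""
          out = np.array([1.0, 1.0])
          for iv in intervals:
              out = out * (1.0 - iv[::-1])
          return 1.0 - out[::-1]

      # Hypothetical basic-event likelihoods as triangular fuzzy numbers.
      leak  = (0.01, 0.02, 0.04)
      spark = (0.05, 0.10, 0.20)
      fault = (0.001, 0.005, 0.01)

      for alpha in (0.0, 0.5, 1.0):
          fire = and_gate([alpha_cut(leak, alpha), alpha_cut(spark, alpha)])
          top = or_gate([fire, alpha_cut(fault, alpha)])
          print(f"alpha = {alpha:.1f}  top-event probability interval = [{top[0]:.5f}, {top[1]:.5f}]")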

  8. Proposal of a method for evaluating tsunami risk using response-surface methodology

    NASA Astrophysics Data System (ADS)

    Fukutani, Y.

    2017-12-01

    Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are computationally expensive because they require multiple tsunami numerical simulations, and they therefore lack versatility. In this study, we propose a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave height using a response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the objective variable. Subsequently, tsunami risk can be evaluated by conducting a Monte Carlo simulation, assuming that the generation probability of an earthquake follows a Poisson distribution, the probability distribution of tsunami inundation depth follows the distribution derived from the response surface, and the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a*x1 + b*x2 + c (a = 0.2615, b = 3.1763, c = -1.1802). We assumed appropriate probability distributions for earthquake generation, inundation height, and vulnerability. Based on these distributions, we conducted Monte Carlo simulations of 1,000,000 years. We clarified that the expected damage probability of the studied wood building is 22.5%, assuming that an earthquake occurs. The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response surface and Monte Carlo simulation without conducting multiple tsunami numerical simulations.
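
    The proposed workflow is simple enough to sketch end to end: sample earthquake occurrence from a Poisson process, sample fault depth and slip, map them to inundation depth through the fitted response surface y = 0.2615*x1 + 3.1763*x2 - 1.1802, and evaluate a lognormal fragility curve. The occurrence rate, input ranges, model scatter and fragility parameters below are hypothetical, chosen only to illustrate the calculation.

      import numpy as np
      from scipy.stats import lognorm

      rng = np.random.default_rng(3)
      n_years = 1_000_000

      # Hypothetical hazard and fragility parameters (for illustration only).
      annual_rate = 0.005                      # Poisson rate of the scenario earthquake
      a, b, c = 0.2615, 3.1763, -1.1802        # response surface from the abstract
      depth_range = (5.0, 15.0)                # fault depth x1 [km]
      slip_range = (1.0, 4.0)                  # fault slip x2 [m]
      frag_median, frag_beta = 8.0, 0.6        # lognormal fragility of the wood building [m]

      n_events = rng.poisson(annual_rate * n_years)
      x1 = rng.uniform(*depth_range, n_events)
      x2 = rng.uniform(*slip_range, n_events)
      inundation = np.maximum(a * x1 + b * x2 + c, 0.0)                  # response-surface prediction [m]
      inundation *= rng.lognormal(mean=0.0, sigma=0.2, size=n_events)    # model scatter (assumed)

      p_damage = lognorm(s=frag_beta, scale=frag_median).cdf(inundation)
      damaged = rng.random(n_events) < p_damage

      print(f"events in {n_years:,} years: {n_events}")
      print(f"damage probability given an earthquake: {damaged.mean():.3f}")
      print(f"annual damage probability: {damaged.sum() / n_years:.2e}")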

  9. Encoding dependence in Bayesian causal networks

    USDA-ARS?s Scientific Manuscript database

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...

  10. Analytical performance evaluation of SAR ATR with inaccurate or estimated models

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.

    2004-09-01

    Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.

  11. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
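
    A bare-bones sketch of the single-sensor detection-probability step (not the authors' code): source levels and ranges are drawn from assumed distributions, the passive sonar equation gives the SNR, a logistic detector characterization converts SNR to detection probability, and averaging yields the mean probability of detecting a click within the monitored radius. Spherical spreading and all parameter values are placeholder assumptions.

      import numpy as np

      rng = np.random.default_rng(11)
      n_sim = 200_000

      # Assumed input distributions (placeholders, not the paper's values).
      source_level = rng.normal(200.0, 5.0, n_sim)       # dB re 1 uPa @ 1 m
      noise_level = 70.0                                  # dB, ambient noise in the click band
      max_range = 5000.0                                  # m, truncation radius w
      # Uniform animal density in the disc of radius w -> range pdf proportional to r.
      ranges = max_range * np.sqrt(rng.random(n_sim))

      transmission_loss = 20.0 * np.log10(ranges)         # spherical spreading (assumed)
      snr = source_level - transmission_loss - noise_level

      def detector(snr_db, snr50=15.0, slope=0.8):
          """Assumed logistic detector characterization: P(detect) vs SNR in dB."""
          return 1.0 / (1.0 + np.exp(-slope * (snr_db - snr50)))

      p_detect = detector(snr).mean()
      print(f"mean probability of detecting a click within {max_range/1000:.0f} km: {p_detect:.3f}")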

  12. Mathematical models of tissue stem and transit target cell divisions and the risk of radiation- or smoking-associated cancer

    PubMed Central

    Hendry, Jolyon H.

    2017-01-01

    There is compelling biological data to suggest that cancer arises from a series of mutations in single target cells, resulting in defects in cell renewal and differentiation processes which lead to malignancy. Because much mutagenic damage is expressed following cell division, more-rapidly renewing tissues could be at higher risk because of the larger number of cell replications. Cairns suggested that renewing tissues may reduce cancer risk by partitioning the dividing cell populations into lineages comprising infrequently-dividing long-lived stem cells and frequently-dividing short-lived daughter transit cells. We develop generalizations of three recent cancer-induction models that account for the joint maintenance and renewal of stem and transit cells, also competing processes of partially transformed cell proliferation and differentiation/apoptosis. We are particularly interested in using these models to separately assess the probabilities of mutation and development of cancer associated with “spontaneous” processes and with those linked to a specific environmental mutagen, specifically ionizing radiation or cigarette smoking. All three models demonstrate substantial variation in cancer risks, by at least 20 orders of magnitude, depending on the assumed number of critical mutations required for cancer, and the stem-cell and transition-cell mutation rates. However, in most cases the conditional probabilities of cancer being mutagen-induced range between 7–96%. The relative risks associated with mutagen exposure compared to background rates are also stable, ranging from 1.0–16.0. Very few cancers, generally <0.5%, arise from mutations occurring solely in stem cells rather than in a combination of stem and transit cells. However, for cancers with 2 or 3 critical mutations, a substantial proportion of cancers, in some cases 100%, have at least one mutation derived from a mutated stem cell. Little difference is made to relative risks if competing processes of proliferation and differentiation in the partially transformed stem and transit cell population are allowed for, nor is any difference made if one assumes that transit cells require an extra mutation to confer malignancy from the number required by stem cells. The probability of a cancer being mutagen-induced correlates across cancer sites with the estimated cumulative number of stem cell divisions in the associated tissue (p<0.05), although in some cases there is sensitivity of findings to removal of high-leverage outliers and in some cases only modest variation in probability, but these issues do not affect the validity of the findings. There are no significant correlations (p>0.3) between lifetime cancer-site specific radiation risk and the probability of that cancer being mutagen-induced. These results do not depend on the assumed critical number of mutations leading to cancer, or on the assumed mutagen-associated mutation rate, within the generally-accepted ranges tested. However, there are borderline significant negative correlations (p = 0.08) between the smoking-associated mortality rate difference (current vs former smokers) and the probability of cancer being mutagen-induced. This is only the case where values of the critical number of mutations leading to cancer, k, is 3 or 4 and not for smaller values (1 or 2), but does not strongly depend on the assumed mutagen-associated mutation rate. PMID:28196079

  13. Mathematical models of tissue stem and transit target cell divisions and the risk of radiation- or smoking-associated cancer.

    PubMed

    Little, Mark P; Hendry, Jolyon H

    2017-02-01

    There is compelling biological data to suggest that cancer arises from a series of mutations in single target cells, resulting in defects in cell renewal and differentiation processes which lead to malignancy. Because much mutagenic damage is expressed following cell division, more-rapidly renewing tissues could be at higher risk because of the larger number of cell replications. Cairns suggested that renewing tissues may reduce cancer risk by partitioning the dividing cell populations into lineages comprising infrequently-dividing long-lived stem cells and frequently-dividing short-lived daughter transit cells. We develop generalizations of three recent cancer-induction models that account for the joint maintenance and renewal of stem and transit cells, also competing processes of partially transformed cell proliferation and differentiation/apoptosis. We are particularly interested in using these models to separately assess the probabilities of mutation and development of cancer associated with "spontaneous" processes and with those linked to a specific environmental mutagen, specifically ionizing radiation or cigarette smoking. All three models demonstrate substantial variation in cancer risks, by at least 20 orders of magnitude, depending on the assumed number of critical mutations required for cancer, and the stem-cell and transition-cell mutation rates. However, in most cases the conditional probabilities of cancer being mutagen-induced range between 7-96%. The relative risks associated with mutagen exposure compared to background rates are also stable, ranging from 1.0-16.0. Very few cancers, generally <0.5%, arise from mutations occurring solely in stem cells rather than in a combination of stem and transit cells. However, for cancers with 2 or 3 critical mutations, a substantial proportion of cancers, in some cases 100%, have at least one mutation derived from a mutated stem cell. Little difference is made to relative risks if competing processes of proliferation and differentiation in the partially transformed stem and transit cell population are allowed for, nor is any difference made if one assumes that transit cells require an extra mutation to confer malignancy from the number required by stem cells. The probability of a cancer being mutagen-induced correlates across cancer sites with the estimated cumulative number of stem cell divisions in the associated tissue (p<0.05), although in some cases there is sensitivity of findings to removal of high-leverage outliers and in some cases only modest variation in probability, but these issues do not affect the validity of the findings. There are no significant correlations (p>0.3) between lifetime cancer-site specific radiation risk and the probability of that cancer being mutagen-induced. These results do not depend on the assumed critical number of mutations leading to cancer, or on the assumed mutagen-associated mutation rate, within the generally-accepted ranges tested. However, there are borderline significant negative correlations (p = 0.08) between the smoking-associated mortality rate difference (current vs former smokers) and the probability of cancer being mutagen-induced. This is only the case where values of the critical number of mutations leading to cancer, k, is 3 or 4 and not for smaller values (1 or 2), but does not strongly depend on the assumed mutagen-associated mutation rate.

  14. Optimization studies of the ITER low field side reflectometer.

    PubMed

    Diem, S J; Wilgen, J B; Bigelow, T S; Hanson, G R; Harvey, R W; Smirnov, A P

    2010-10-01

    Microwave reflectometry will be used on ITER to measure the electron density profile, density fluctuations due to MHD/turbulence, edge localized mode (ELM) density transients, and as an L-H transition monitor. The ITER low field side reflectometer system will measure both core and edge quantities using multiple antenna arrays spanning frequency ranges of 15-155 GHz for the O-mode system and 55-220 GHz for the X-mode system. Optimization studies using the GENRAY ray-tracing code have been done for edge and core measurements. The reflectometer launchers will utilize the HE11 mode launched from circular corrugated waveguide. The launched beams are assumed to be Gaussian with a beam waist diameter of 0.643 times the waveguide diameter. Optimum launcher size and placement are investigated by computing the antenna coupling between launchers, assuming the launched and received beams have a Gaussian beam pattern.

  15. Sn ion energy distributions of ns- and ps-laser produced plasmas

    NASA Astrophysics Data System (ADS)

    Bayerle, A.; Deuzeman, M. J.; van der Heijden, S.; Kurilovich, D.; de Faria Pinto, T.; Stodolna, A.; Witte, S.; Eikema, K. S. E.; Ubachs, W.; Hoekstra, R.; Versolato, O. O.

    2018-04-01

    Ion energy distributions arising from laser-produced plasmas of Sn are measured over a wide laser parameter space. Planar-solid and liquid-droplet targets are exposed to infrared laser pulses with energy densities between 1 J cm^-2 and 4 kJ cm^-2 and durations spanning 0.5 ps to 6 ns. The measured ion energy distributions are compared to two self-similar solutions of a hydrodynamic approach assuming isothermal expansion of the plasma plume into vacuum. For planar and droplet targets exposed to ps-long pulses, we find good agreement between the experimental results and the self-similar solution of a semi-infinite simple planar plasma configuration with an exponential density profile. The ion energy distributions resulting from solid Sn exposed to ns-pulses agree with solutions of a limited-mass model that assumes a Gaussian-shaped initial density profile.

  16. Nonsimilar Solution for Shock Waves in a Rotational Axisymmetric Perfect Gas with a Magnetic Field and Exponentially Varying Density

    NASA Astrophysics Data System (ADS)

    Nath, G.; Sinha, A. K.

    2017-01-01

    The propagation of a cylindrical shock wave in an ideal gas in the presence of a constant azimuthal magnetic field with consideration for the axisymmetric rotational effects is investigated. The ambient medium is assumed to have the radial, axial, and azimuthal velocity components. The fluid velocities and density of the ambient medium are assumed to vary according to an exponential law. Nonsimilar solutions are obtained by taking into account the vorticity vector and its components. The dependences of the characteristics of the problem on the Alfven-Mach number and time are obtained. It is shown that the presence of a magnetic field has a decaying effect on the shock wave. The pressure and density are shown to vanish at the inner surface (piston), and hence a vacuum forms at the line of symmetry.

  17. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At the local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At the regional scale, joint inversion of gravity and magnetic data is applied for the estimation of the lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At the planetary scale, the Earth's mantle temperature and element composition are inferred from seismic travel-time and geodetic data.

  18. A sequence-dependent rigid-base model of DNA

    NASA Astrophysics Data System (ADS)

    Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.

    2013-02-01

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. For example, due to the presence of frustration, the model can successfully predict the nonlocal changes in the minimum energy configuration of an oligomer that are consequent upon a local change of sequence at the level of a single point mutation.
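
    Because the model and the MD training data are both summarized by Gaussian configuration-space densities, the Kullback-Leibler divergence used for such assessments has a closed form. The sketch below is generic, with random placeholder parameters, and evaluates KL(N0 || N1) for two multivariate Gaussians.

      import numpy as np

      def kl_gaussians(mu0, cov0, mu1, cov1):
          """KL divergence KL(N(mu0, cov0) || N(mu1, cov1)) in nats."""
          k = mu0.size
          cov1_inv = np.linalg.inv(cov1)
          diff = mu1 - mu0
          term_trace = np.trace(cov1_inv @ cov0)
          term_mahal = diff @ cov1_inv @ diff
          _, logdet0 = np.linalg.slogdet(cov0)
          _, logdet1 = np.linalg.slogdet(cov1)
          return 0.5 * (term_trace + term_mahal - k + logdet1 - logdet0)

      rng = np.random.default_rng(5)
      k = 12                                   # e.g. rigid-base coordinates of a short fragment (placeholder)
      A = rng.standard_normal((k, k))
      cov0 = A @ A.T + k * np.eye(k)           # placeholder "model" covariance
      cov1 = cov0 + 0.1 * np.eye(k)            # slightly perturbed "reference" covariance
      mu0 = np.zeros(k)
      mu1 = 0.05 * rng.standard_normal(k)

      print(f"KL(model || reference) = {kl_gaussians(mu0, cov0, mu1, cov1):.4f} nats")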

  19. Analysis of the progressive failure of brittle matrix composites

    NASA Technical Reports Server (NTRS)

    Thomas, David J.

    1995-01-01

    This report investigates two of the most common modes of localized failures, namely, periodic fiber-bridged matrix cracks and transverse matrix cracks. A modification of Daniels' bundle theory is combined with Weibull's weakest link theory to model the statistical distribution of the periodic matrix cracking strength for an individual layer. Results of the model predictions are compared with experimental data from the open literature. Extensions to the model are made to account for possible imperfections within the layer (i.e., nonuniform fiber lengths, irregular crack spacing, and degraded in-situ fiber properties), and the results of these studies are presented. A generalized shear-lag analysis is derived which is capable of modeling the development of transverse matrix cracks in material systems having a general multilayer configuration and under states of full in-plane load. A method for computing the effective elastic properties for the damaged layer at the global level is detailed based upon the solution for the effects of the damage at the local level. This methodology is general in nature and is therefore also applicable to (0_m/90_n)_s systems. The characteristic stress-strain response for more general cases is shown to be qualitatively correct (experimental data is not available for a quantitative evaluation), and the damage evolution is recorded in terms of the matrix crack density as a function of the applied strain. Probabilistic effects are introduced to account for the statistical nature of the material strengths, thus allowing cumulative distribution curves for the probability of failure to be generated for each of the example laminates. Additionally, Oh and Finney's classic work on fracture location in brittle materials is extended and combined with the shear-lag analysis. The result is an analytical form for predicting the probability density function for the location of the next transverse crack occurrence within a crack bounded region. The results of this study verified qualitatively the validity of assuming a uniform crack spacing (as was done in the shear-lag model).

  20. Immortality of Cu damascene interconnects

    NASA Astrophysics Data System (ADS)

    Hau-Riege, Stefan P.

    2002-04-01

    We have studied short-line effects in fully-integrated Cu damascene interconnects through electromigration experiments on lines of various lengths and embedded in different dielectric materials. We compare these results with results from analogous experiments on subtractively-etched Al-based interconnects. It is known that Al-based interconnects exhibit three different behaviors, depending on the magnitude of the product of current density, j, and line length, L: For small values of (jL), no void nucleation occurs, and the line is immortal. For intermediate values, voids nucleate, but the line does not fail because the current can flow through the higher-resistivity refractory-metal-based shunt layers. Here, the resistance of the line increases but eventually saturates, and the relative resistance increase is proportional to (jL/B), where B is the effective elastic modulus of the metallization system. For large values of (jL/B), voiding leads to an unacceptably high resistance increase, and the line is considered failed. By contrast, we observed only two regimes for Cu-based interconnects: Either the resistance of the line stays constant during the duration of the experiment, and the line is considered immortal, or the line fails due to an abrupt open-circuit failure. The absence of an intermediate regime in which the resistance saturates is due to the absence of a shunt layer that is able to support a large amount of current once voiding occurs. Since voids nucleate much more easily in Cu- than in Al-based interconnects, a small fraction of short Cu lines fails even at low current densities. It is therefore more appropriate to consider the probability of immortality in the case of Cu rather than assuming a sharp boundary between mortality and immortality. The probability of immortality decreases with increasing amount of material depleted from the cathode, which is proportional to (jL^2/B) at steady state. By contrast, the immortality of Al-based interconnects is described by (jL) if no voids nucleate, and (jL/B) if voids nucleate.

  1. A sequence-dependent rigid-base model of DNA.

    PubMed

    Gonzalez, O; Petkevičiūtė, D; Maddocks, J H

    2013-02-07

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. For example, due to the presence of frustration, the model can successfully predict the nonlocal changes in the minimum energy configuration of an oligomer that are consequent upon a local change of sequence at the level of a single point mutation.

  2. Point count length and detection of forest neotropical migrant birds

    USGS Publications Warehouse

    Dawson, D.K.; Smith, D.R.; Robbins, C.S.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences existed among years or observers in both the probability of detecting the species and in the rate at which individuals are counted. We demonstrate the consequence that variability in species' detection probabilities can have on estimates of population change, and discuss ways for reducing this source of bias in point count studies.
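
    One common way to model detection probability as a function of count length is an exponential time-to-first-detection model, P(detected within t minutes) = 1 - exp(-r*t); fitting r per species, year, or observer lets the bias from unequal count lengths be quantified. The sketch below uses simulated data and a hypothetical rate, and illustrates the general approach rather than the authors' exact estimator.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)

      # Simulate whether a species is detected at points surveyed for different durations.
      true_rate = 0.25                       # per-minute detection rate (hypothetical)
      durations = np.repeat([3.0, 5.0, 10.0], 200)
      detected = rng.random(durations.size) < (1.0 - np.exp(-true_rate * durations))

      def neg_log_lik(rate):
          log_q = -rate * durations                 # log P(not detected within t) = -r*t
          log_p = np.log1p(-np.exp(log_q))          # log P(detected within t)
          return -np.sum(np.where(detected, log_p, log_q))

      fit = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
      rate_hat = fit.x
      print(f"estimated per-minute rate: {rate_hat:.3f} (true {true_rate})")
      for t in (3, 5, 10):
          print(f"  detection probability in a {t}-min count: {1 - np.exp(-rate_hat * t):.2f}")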

  3. Individual heterogeneity and identifiability in capture-recapture models

    USGS Publications Warehouse

    Link, W.A.

    2004-01-01

    Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed population mark-recapture model M_h. The problem of identifiability is obvious if the population includes individuals with p_i = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for p_i, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.

  4. Net present value approaches for drug discovery.

    PubMed

    Svennebring, Andreas M; Wikberg, Jarl Es

    2013-12-01

    Three dedicated approaches to the calculation of the risk-adjusted net present value (rNPV) in drug discovery projects under different assumptions are suggested. In contrast to previously used models, the probability of finding a candidate drug suitable for clinical development and the time to the initiation of clinical development are assumed to be flexible. The rNPV of the post-discovery cash flows is calculated as the probability-weighted average of the rNPV at each potential time of initiation of clinical development. Practical considerations on how to set probability rates, in particular during the initiation and termination of a project, are discussed.
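
    The central quantity, the rNPV of post-discovery cash flows taken as a probability-weighted average over the possible times at which clinical development starts, can be sketched as below. The discount rate, success probabilities and cash flows are hypothetical placeholders, not figures from the paper.

      # Hypothetical rNPV sketch: probability-weighted over possible development start times.
      discount_rate = 0.12

      # Probability that a candidate drug emerges from discovery in year t;
      # the remaining 0.40 is the chance that no candidate is ever found.
      p_start = {2: 0.15, 3: 0.25, 4: 0.20}

      # Risk-adjusted post-discovery cash flows relative to the development start year
      # (negative = development costs, positive = expected revenues already weighted by
      # clinical/regulatory success probabilities).
      post_discovery_cash = {0: -10.0, 1: -15.0, 2: -20.0, 5: 40.0, 6: 60.0, 7: 60.0}

      def npv_at_start(start_year):
          """NPV (discounted to today) of post-discovery cash flows if development starts then."""
          return sum(cf / (1 + discount_rate) ** (start_year + offset)
                     for offset, cf in post_discovery_cash.items())

      rnpv = sum(p * npv_at_start(t) for t, p in p_start.items())
      print(f"probability a candidate is ever found: {sum(p_start.values()):.2f}")
      print(f"risk-adjusted NPV of post-discovery cash flows: {rnpv:.1f} (arbitrary currency units)")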

  5. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.

  6. Analysis of a semiclassical model for rotational transition probabilities. [in highly nonequilibrium flow of diatomic molecules

    NASA Technical Reports Server (NTRS)

    Deiwert, G. S.; Yoshikawa, K. K.

    1975-01-01

    A semiclassical model proposed by Pearson and Hansen (1974) for computing collision-induced transition probabilities in diatomic molecules is tested by the direct-simulation Monte Carlo method. Specifically, this model is described by point centers of repulsion for collision dynamics, and the resulting classical trajectories are used in conjunction with the Schroedinger equation for a rigid-rotator harmonic oscillator to compute the rotational energy transition probabilities necessary to evaluate the rotation-translation exchange phenomena. It is assumed that a single, average energy spacing exists between the initial state and possible final states for a given collision.

  7. Oak regeneration and overstory density in the Missouri Ozarks

    Treesearch

    David R. Larsen; Monte A. Metzger

    1997-01-01

    Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...

  8. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover for the wait-first jump model we observe a different behavior of MSD when t_a ≪ t and t_a ≫ t.

  9. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
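
    The comparison of the two populations rests on empirical CDFs and two-sample tests. A minimal sketch with synthetic period samples is given below; both samples are drawn here from a P^(-1) density (i.e. log-uniform), purely for illustration.

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(9)

      # A P^(-1) probability density corresponds to log10(P) being uniform.
      def sample_loguniform(n, p_min=3.0, p_max=3000.0):
          return 10 ** rng.uniform(np.log10(p_min), np.log10(p_max), n)

      periods_epc = sample_loguniform(70)     # stand-in for extrasolar planetary candidates
      periods_sb = sample_loguniform(120)     # stand-in for spectroscopic binaries

      # Empirical CDF of a sample, evaluated at the sorted sample points.
      def ecdf(x):
          xs = np.sort(x)
          return xs, np.arange(1, xs.size + 1) / xs.size

      xs, F = ecdf(periods_epc)
      print(f"EPC median period (ECDF 50%): {xs[np.searchsorted(F, 0.5)]:.1f} days")

      stat, pvalue = ks_2samp(periods_epc, periods_sb)
      print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.3f} "
            "(a large p-value means the two period samples are statistically indistinguishable)")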

  10. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    Our objective was to understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited.
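
    The benchmark calculation amounts to estimating a density of observed treatment effects and integrating its tail beyond a 'large effect' threshold. The sketch below uses a simple precision-weighted Gaussian KDE on synthetic log hazard ratios; the data, weights and thresholds are illustrative, and the adaptive bandwidth of the paper is replaced by a fixed one.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic trial results: observed log hazard ratios and their standard errors.
      n_trials = 800
      true_effects = rng.normal(-0.05, 0.15, n_trials)
      se = rng.uniform(0.05, 0.3, n_trials)
      log_hr = rng.normal(true_effects, se)
      weights = 1.0 / se**2                        # precision weights
      weights /= weights.sum()

      def weighted_kde(x_eval, data, w, bandwidth=0.05):
          """Weighted Gaussian kernel density estimate evaluated at x_eval."""
          z = (x_eval[:, None] - data[None, :]) / bandwidth
          kernels = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
          return (kernels * w[None, :]).sum(axis=1) / bandwidth

      grid = np.linspace(-1.5, 1.0, 2001)
      density = weighted_kde(grid, log_hr, weights)

      # Probability of a "large" benefit, e.g. hazard ratio below 0.7 (log HR < ln 0.7).
      threshold = np.log(0.7)
      dx = grid[1] - grid[0]
      p_large = density[grid < threshold].sum() * dx
      print(f"estimated probability of a large treatment effect (HR < 0.7): {p_large:.3f}")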

  11. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
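    The role Bayes' theorem plays in this argument can be made explicit with the standard inversion formula; the sketch below follows the usual presentation of Humphreys-style objections and is not taken verbatim from the paper.

```latex
% Bayes' theorem inverts a conditional probability:
\[
  P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)} .
\]
% Read as a propensity, P(B | A) is the tendency of condition A to produce outcome B.
% The inverted quantity P(A | B) would then be a "tendency of the outcome to produce
% its condition", which has no causal interpretation; a propensity calculus can
% therefore keep limit theorems such as the Law of Large Numbers while rejecting
% the inversion step that Bayes' theorem requires.
```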

  12. Modeling KBOs Charon, Orcus and Salacia by means of a new equation of state for porous icy bodies

    NASA Astrophysics Data System (ADS)

    Malamud, U.; Prialnik, D.

    2015-10-01

    We use a one-dimensional adaptive-grid thermal evolution code to model intermediate sized Kuiper belt objects Charon, Orcus and Salacia and compare their measured bulk densities with those resulting from evolutionary calculations at the end of 4.6 Gyr. Our model assumes an initial homogeneous composition of mixed ice and rock, and follows the multiphase flow of water through the porous rocky medium, consequent differentiation and aqueous chemical alterations in the rock. Heating sources include long-lived radionuclides, serpentinization reactions, release of gravitational potential energy due to compaction, and crystallization of amorphous ice. The density profile is calculated by assuming hydrostatic equilibrium to be maintained through changes in composition, pressure and temperature. To this purpose, we construct an equation of state suitable for porous icy bodies with radii of a few hundred km, based on the best available empirical studies of ice and rock compaction, and on comparisons with rock porosities in Earth analog and Solar System silicates. We show that the observed bulk densities can be reproduced by assuming the same set of initial and physical parameters, including the same rock/ice mass ratio for all three bodies. We conclude that the mass of the object uniquely determines the evolution of porosity, and thus explains the observed differences in bulk density. The final structure of all three objects is differentiated, with an inner rocky core, and outer ice-enriched mantle. The degree of differentiation, too, is determined by the object's mass.

  13. Modeling Kuiper belt objects Charon, Orcus and Salacia by means of a new equation of state for porous icy bodies

    NASA Astrophysics Data System (ADS)

    Malamud, Uri; Prialnik, Dina

    2015-01-01

    We use a one-dimensional adaptive-grid thermal evolution code to model Kuiper belt objects Charon, Orcus and Salacia and compare their measured bulk densities with those resulting from evolutionary calculations at the end of 4.6 Gyr. Our model assumes an initial homogeneous composition of mixed ice and rock, and follows the multiphase flow of water through the porous rocky medium, consequent differentiation and aqueous chemical alterations in the rock. Heating sources include long-lived radionuclides, serpentinization reactions, release of gravitational potential energy due to compaction, and crystallization of amorphous ice. The density profile is calculated by assuming hydrostatic equilibrium to be maintained through changes in composition, pressure and temperature. To this purpose, we construct an equation of state suitable for porous icy bodies with radii of a few hundred km, based on the best available empirical studies of ice and rock compaction, and on comparisons with rock porosities in Earth analog and Solar System silicates. We show that the observed bulk densities can be reproduced by assuming the same set of initial and physical parameters, including the same rock/ice mass ratio for all three bodies. We conclude that the mass of the object uniquely determines the evolution of porosity, and thus explains the observed differences in bulk density. The final structure of all three objects is differentiated, with an inner rocky core, and outer ice-enriched mantle. The degree of differentiation, too, is determined by the object's mass.
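    A toy version of the hydrostatic-structure step described in both records, assuming an illustrative exponential compaction law in place of the empirical equation of state: the sketch iterates between the density profile, the enclosed mass, and the hydrostatic pressure dP/dr = -G m(r) rho / r^2, and reports the resulting bulk density. All coefficients are placeholders.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def porosity(P, psi0=0.4, P_c=1e7):
    """Toy compaction law: pore space closes exponentially with pressure (Pa).
    psi0 and P_c are illustrative placeholders, not the empirical fits used in the paper."""
    return psi0 * np.exp(-P / P_c)

def structure(R=600e3, rho_solid=1600.0, n=400, n_iter=50):
    """Self-consistent density profile of a porous ice/rock body of radius R (m).

    Fixed-point iteration: assume a density profile, accumulate the enclosed mass m(r),
    integrate the hydrostatic pressure dP/dr = -G m rho / r^2 inward from the surface,
    update the density from the porosity law, and repeat."""
    r = np.linspace(1.0, R, n)
    rho = np.full(n, rho_solid * (1.0 - porosity(0.0)))        # start fully uncompacted
    for _ in range(n_iter):
        shell_mass = 4.0 * np.pi * r[1:] ** 2 * rho[1:] * np.diff(r)
        m = np.concatenate(([0.0], np.cumsum(shell_mass)))     # enclosed mass m(r)
        f = G * m * rho / r ** 2                               # -dP/dr, positive
        dP_shell = f[:-1] * np.diff(r)                         # pressure drop across each shell
        P = np.concatenate((np.cumsum(dP_shell[::-1])[::-1], [0.0]))   # P = 0 at the surface
        rho = rho_solid * (1.0 - porosity(P))                  # compact according to the toy law
    bulk_density = m[-1] / (4.0 / 3.0 * np.pi * R ** 3)
    return r, rho, P, bulk_density

_, _, _, rho_bulk = structure()
print(f"bulk density ~ {rho_bulk:.0f} kg m^-3")
```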

  14. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1980-01-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  15. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Astrophysics Data System (ADS)

    Kastner, S. O.; Bhatia, A. K.

    1980-08-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
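    The two records above concern density-dependent level population ratios. The generic sketch below solves steady-state rate equations for a small model atom with made-up radiative and collisional rates; it only illustrates how population ratios vary with electron density and does not implement the taboo-probability formalism of the paper.

```python
import numpy as np

def steady_state_populations(A, C, n_e):
    """Solve dn/dt = 0 for level populations given radiative (A) and collisional (C) rates.

    A[i, j] (s^-1) and C[i, j] (cm^3 s^-1) are rates for transitions i -> j;
    n_e is the electron density in cm^-3."""
    R = A + n_e * C                        # total transition rate matrix, i -> j
    n_levels = R.shape[0]
    M = R.T - np.diag(R.sum(axis=1))       # dn_j/dt = sum_i n_i R_ij - n_j sum_k R_jk
    M[0, :] = 1.0                          # replace one equation with sum(n) = 1
    b = np.zeros(n_levels)
    b[0] = 1.0
    return np.linalg.solve(M, b)

# Three-level toy system with made-up rates: population ratio n2/n1 as n_e rises.
A = np.array([[0.0, 0.0, 0.0],
              [1e2, 0.0, 0.0],
              [1e3, 1e1, 0.0]])
C = np.array([[0.0, 1e-8, 1e-9],
              [1e-9, 0.0, 1e-8],
              [1e-10, 1e-9, 0.0]])
for n_e in (1e8, 1e12, 1e16):
    n = steady_state_populations(A, C, n_e)
    print(f"n_e = {n_e:.0e} cm^-3  ->  n2/n1 = {n[2] / n[1]:.3f}")
```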

  16. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface]

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  17. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
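    A compact sketch of the multicanonical idea under stated assumptions: a toy nonlinear function of Gaussian noise stands in for the FWM-limited decision variable, and a Metropolis walk is iteratively re-weighted by the running PDF estimate so that rare bins are visited about as often as common ones. The bin range, step size, and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def decision_variable(x):
    """Toy stand-in for the receiver decision variable: a nonlinear function of
    Gaussian 'noise' components x (the FWM physics would enter here)."""
    return x[0] + 0.3 * np.sum(x[1:] ** 2)

def multicanonical_pdf(dim=4, n_iter=8, n_steps=20000, bins=60, y_range=(-4.0, 8.0)):
    """Estimate the PDF of the decision variable by multicanonical Monte Carlo.

    Each iteration runs a Metropolis walk whose target is the Gaussian prior on x
    re-weighted by 1 / p_est(y), then multiplies the running estimate by the visit
    histogram (flat-histogram update)."""
    edges = np.linspace(*y_range, bins + 1)
    log_p = np.zeros(bins)                        # running log-PDF estimate (up to a constant)
    x = np.zeros(dim)
    y = decision_variable(x)
    b = np.searchsorted(edges, y) - 1
    for _ in range(n_iter):
        hist = np.zeros(bins)
        for _ in range(n_steps):
            x_new = x + 0.5 * rng.standard_normal(dim)
            y_new = decision_variable(x_new)
            b_new = np.searchsorted(edges, y_new) - 1
            if 0 <= b_new < bins:
                # Metropolis log-ratio: Gaussian prior on x times multicanonical weight 1/p_est.
                log_ratio = (0.5 * np.sum(x ** 2) - 0.5 * np.sum(x_new ** 2)
                             + log_p[b] - log_p[b_new])
                if np.log(rng.random()) < log_ratio:
                    x, y, b = x_new, y_new, b_new
            hist[b] += 1
        visited = hist > 0
        log_p[visited] += np.log(hist[visited])   # flat-histogram update of the estimate
        log_p -= log_p.max()
    p = np.exp(log_p)
    p /= np.sum(p * np.diff(edges))               # normalise to a proper density
    return edges, p

edges, p = multicanonical_pdf()
tail = edges[:-1] >= 6.0
print("estimated tail probability beyond y = 6:", np.sum((p * np.diff(edges))[tail]))
```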

  18. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using ultrasound B-mode images, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from fibrotic liver, has been proposed. In this paper, the effect of non-speckle echo signals on the tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were identified and removed using the modeling error of the multi-Rayleigh model. With the non-speckle signals removed, the correct tissue characteristics of fibrotic tissue could be estimated.
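    The multi-Rayleigh model referred to here is a weighted mixture of Rayleigh amplitude distributions. The sketch below evaluates such a mixture and flags samples to which the fitted model assigns negligible density, as a rough analogue of screening out non-speckle signals; the component weights, scales, and threshold are placeholders rather than values from the paper.

```python
import numpy as np

def rayleigh_pdf(x, sigma):
    """Rayleigh probability density with scale parameter sigma."""
    return (x / sigma ** 2) * np.exp(-x ** 2 / (2.0 * sigma ** 2))

def multi_rayleigh_pdf(x, weights, sigmas):
    """Weighted mixture of Rayleigh components (a 'multi-Rayleigh' density);
    the weights are assumed to sum to one."""
    return sum(w * rayleigh_pdf(x, s) for w, s in zip(weights, sigmas))

# Illustrative three-component mixture, e.g. normal tissue plus two fibrotic scales.
weights, sigmas = (0.6, 0.3, 0.1), (1.0, 2.0, 4.0)

# Crude non-speckle screen: flag amplitudes to which the mixture assigns negligible density.
amplitudes = np.abs(np.random.default_rng(4).normal(0.0, 2.0, size=1000))
density = multi_rayleigh_pdf(amplitudes, weights, sigmas)
non_speckle = density < 1e-4
print(f"flagged {non_speckle.sum()} of {amplitudes.size} samples as non-speckle-like")
```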

  19. Proton elastic scattering at 200 A MeV and high momentum transfers of 1.7-2.7 fm^-1 as a probe of the nuclear matter density of 6He

    NASA Astrophysics Data System (ADS)

    Chebotaryov, S.; Sakaguchi, S.; Uesaka, T.; Akieda, T.; Ando, Y.; Assie, M.; Beaumel, D.; Chiga, N.; Dozono, M.; Galindo-Uribarri, A.; Heffron, B.; Hirayama, A.; Isobe, T.; Kaki, K.; Kawase, S.; Kim, W.; Kobayashi, T.; Kon, H.; Kondo, Y.; Kubota, Y.; Leblond, S.; Lee, H.; Lokotko, T.; Maeda, Y.; Matsuda, Y.; Miki, K.; Milman, E.; Motobayashi, T.; Mukai, T.; Nakai, S.; Nakamura, T.; Ni, A.; Noro, T.; Ota, S.; Otsu, H.; Ozaki, T.; Panin, V.; Park, S.; Saito, A.; Sakai, H.; Sasano, M.; Sato, H.; Sekiguchi, K.; Shimizu, Y.; Stefan, I.; Stuhl, L.; Takaki, M.; Taniue, K.; Tateishi, K.; Terashima, S.; Togano, Y.; Tomai, T.; Wada, Y.; Wakasa, T.; Wakui, T.; Watanabe, A.; Yamada, H.; Yang, Zh; Yasuda, M.; Yasuda, J.; Yoneda, K.; Zenihiro, J.

    2018-05-01

    Differential cross sections of p-^6He elastic scattering were measured in inverse kinematics at an incident energy of 200 A MeV, covering the high momentum transfer region of 1.7-2.7 fm^{-1}. The sensitivity of the elastic scattering at low and high momentum transfers to the density distribution was investigated quantitatively using relativistic impulse approximation calculations. In the high momentum transfer region, where the present data were taken, the differential cross section has an order of magnitude higher sensitivity to the inner part of the ^6He density relative to the peripheral part (15:1). This feature makes the obtained data valuable for the deduction of the inner part of the ^6He density. The data were compared to a set of calculations assuming different proton and neutron density profiles of ^6He. The data are well reproduced by the calculation assuming almost the same profiles of proton and neutron densities around the center of ^6He, and a proton profile reproducing the known point-proton radius of 1.94 fm. This finding is consistent with the assumption that the ^6He nucleus consists of a rigid α-like core with a two-neutron halo.

  20. The ranking probability approach and its usage in design and analysis of large-scale studies.

    PubMed

    Kuo, Chia-Ling; Zaykin, Dmitri

    2013-01-01

    In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal [Formula: see text]-level such as 0.05 is adjusted by the number of tests, [Formula: see text], i.e., as 0.05/[Formula: see text]. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is as the probability of making at least one correct rejection at the assumed [Formula: see text]-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability [Formula: see text] is controlled, defined as the probability of making at least [Formula: see text] correct rejections while rejecting the hypotheses with the [Formula: see text] smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., [Formula: see text]) is equal to the power at the level [Formula: see text], to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study the properties of our approach when the effect size distribution is replaced, for convenience, by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when [Formula: see text] is very large and [Formula: see text] is small. Precision is largely restored when three values with their respective abundances are used instead of a single typical effect size value.
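    A small simulation sketch of the ranking-probability idea, assuming placeholder values for the number of tests, the number of true signals, and a common effect size (the record's own formulas are elided above): it estimates the probability that the smallest P-value belongs to a true signal and, for comparison, the power of a single two-sided z-test at an illustrative Bonferroni-adjusted level.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def simulate(m=1000, m_true=50, effect=3.5, alpha=0.05, n_sim=2000):
    """Estimate P(the smallest P-value comes from a true signal) by simulation.

    Test statistics are two-sided z-scores: N(0,1) under the null and N(effect,1)
    for true signals; all parameter values are illustrative placeholders."""
    top_is_true = 0
    for _ in range(n_sim):
        p_null = 2 * norm.sf(np.abs(rng.standard_normal(m - m_true)))
        p_true = 2 * norm.sf(np.abs(rng.standard_normal(m_true) + effect))
        if p_true.min() < p_null.min():            # smallest P-value belongs to a true signal
            top_is_true += 1
    p_rank1 = top_is_true / n_sim
    # Power of one two-sided test at an illustrative Bonferroni-style level alpha / m.
    z_crit = norm.isf(alpha / m / 2)
    power = norm.sf(z_crit - effect) + norm.cdf(-z_crit - effect)
    return p_rank1, power

print(simulate())
```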
