Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation, and compares the results of initializing an estimation scheme via a traditional transformation with those obtained via the alternative approach.
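For reference, the multivariate transformation theorem invoked above takes the standard form below (general statement; the notation is not taken from the paper):

```latex
% Density of Y = g(X) for an invertible, differentiable map g
p_Y(\mathbf{y}) = p_X\!\left(g^{-1}(\mathbf{y})\right)
\left|\det \frac{\partial g^{-1}(\mathbf{y})}{\partial \mathbf{y}}\right|
```

A uniform admissible-region density therefore generally becomes non-uniform after a nonlinear change of state variables, which is the behavior the paper contrasts with the transformation-group treatment.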
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
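As background for the construction described above, the Jeffreys prior is the volume element of the information metric; in the quantum setting the Fisher information is replaced by the Bures metric (standard definitions, not this paper's specific parameterization):

```latex
\pi_{\mathrm{J}}(\theta) \propto \sqrt{\det I(\theta)}
\qquad\longrightarrow\qquad
\pi_{\mathrm{QJ}}(\theta) \propto \sqrt{\det g_{\mathrm{Bures}}(\theta)}
```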
NASA Astrophysics Data System (ADS)
Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.
2013-11-01
A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.
Rufibach, Kaspar; Burger, Hans Ulrich; Abt, Markus
2016-09-01
Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a β-distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
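A compact statement of the MRE formulation summarized above (a standard form; the notation is assumed here rather than taken from the MATLAB code): the posterior density q(m), supported on the box defined by the lower and upper bounds of m, minimizes the relative entropy to the prior p(m) subject to reproducing the data in expectation, and the point estimate is the posterior mean:

```latex
\min_{q}\ \int q(\mathbf{m})\,\ln\frac{q(\mathbf{m})}{p(\mathbf{m})}\,d\mathbf{m}
\quad\text{s.t.}\quad
\int q(\mathbf{m})\,\mathbf{G}\mathbf{m}\,d\mathbf{m}=\mathbf{d},
\qquad
\hat{\mathbf{m}}=\int q(\mathbf{m})\,\mathbf{m}\,d\mathbf{m}.
```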
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
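A minimal Python sketch of this procedure, using the commonly cited form of Knuth's relative log-posterior for an M-bin piecewise-constant model (constant terms dropped; treat the exact expression as an assumption to be checked against the optBINS paper):

```python
import numpy as np
from scipy.special import gammaln

def knuth_log_posterior(data, m):
    """Relative log-posterior for an m-bin uniform-width histogram
    (multinomial likelihood with a non-informative prior; constants dropped)."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m)
            + gammaln(m / 2.0)
            - m * gammaln(0.5)
            - gammaln(n + m / 2.0)
            + np.sum(gammaln(counts + 0.5)))

def optimal_bins(data, max_bins=200):
    """Number of bins that maximizes the posterior probability."""
    candidates = np.arange(1, max_bins + 1)
    scores = [knuth_log_posterior(data, m) for m in candidates]
    return int(candidates[np.argmax(scores)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print("optimal bins:", optimal_bins(rng.normal(size=1000)))
```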
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
Because model parameters, the necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework is needed in evaluation work to obtain the best set of parameters (resonance parameters, optical-model parameters, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters from a relation of the form pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as estimating the posterior probability density of a set of parameters (denoted x⃗) given prior information on these parameters and a likelihood that gives the probability density of observing a data set given x⃗. Two major paths can be taken to solve this problem: introduce approximations and hypotheses to obtain an equation that can be solved numerically (minimization of a cost function, the Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems: they avoid the approximations present in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range (thermal, resonance and continuum) for all nuclear reaction models at these energies. Algorithms based on Monte Carlo sampling and Markov chains are presented. The objectives of BMC are to provide a reference calculation for validating GLS calculations and approximations, to test the effects of the chosen probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved-resonance, unresolved-resonance and continuum evaluations, as well as multigroup cross-section data assimilation, are presented.
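As a minimal illustration of the BMC idea of sampling pdf(posterior) ∝ pdf(prior) × likelihood, a random-walk Metropolis sketch in Python is given below; the one-parameter model, prior and likelihood are placeholders, not evaluated nuclear data:

```python
import numpy as np

def metropolis(log_prior, log_likelihood, theta0, n_samples=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampling of the posterior ∝ prior × likelihood."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    log_post = log_prior(theta) + log_likelihood(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        log_post_prop = log_prior(proposal) + log_likelihood(proposal)
        if np.log(rng.uniform()) < log_post_prop - log_post:   # accept/reject step
            theta, log_post = proposal, log_post_prop
        chain.append(theta.copy())
    return np.array(chain)

if __name__ == "__main__":
    data, sigma = 1.2, 0.1                                       # hypothetical measurement
    log_prior = lambda t: -0.5 * ((t[0] - 1.0) / 0.5) ** 2       # prior: N(1.0, 0.5^2)
    log_like = lambda t: -0.5 * ((data - t[0]) / sigma) ** 2     # Gaussian likelihood
    samples = metropolis(log_prior, log_like, [1.0])
    print("posterior mean:", samples[5000:].mean())
```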
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, with the aim of providing a better understanding of the theory. In this research, we used Bayesian theory to detect hepatitis disease and to display the result of the diagnosis process. Bayesian theory, rediscovered and refined by Laplace, rests on a basic idea: use the known prior probabilities and conditional probability densities with Bayes' theorem to calculate the corresponding posterior probability, and then use the posterior probability to make inferences and decisions. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever and headache, and the method computes the probability of hepatitis given the presence of malaise, fever, and headache. The results revealed that Bayesian theory successfully identified the existence of hepatitis disease.
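A toy numerical version of the Bayes update described above; the prior and symptom probabilities are illustrative placeholders, not values from the paper:

```python
# P(hepatitis | malaise, fever, headache) via Bayes' theorem,
# assuming conditionally independent symptoms (naive Bayes).
p_hep = 0.05                         # assumed prior probability of hepatitis
p_sym_given_hep = 0.8 * 0.7 * 0.6    # assumed P(symptoms | hepatitis)
p_sym_given_not = 0.2 * 0.1 * 0.15   # assumed P(symptoms | no hepatitis)

evidence = p_sym_given_hep * p_hep + p_sym_given_not * (1 - p_hep)
posterior = p_sym_given_hep * p_hep / evidence
print(f"P(hepatitis | symptoms) = {posterior:.3f}")
```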
Storkel, Holly L.; Lee, Jaehoon; Cox, Casey
2016-01-01
Purpose: Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method: Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results: The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions: As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2016-04-01
There is a large amount of functional genetic data available, which can be used to inform fine-mapping association studies (in diseases with well-characterised disease pathways). Single nucleotide polymorphism (SNP) prioritization via Bayes factors is attractive because prior information can inform the effect size or the prior probability of causal association. This approach requires the specification of the effect size. If the information needed to estimate a priori the probability density for the effect sizes of causal SNPs in a genomic region is not consistent or is not available, then specifying a prior variance for the effect sizes is challenging. We propose both an empirical method to estimate this prior variance, and a coherent approach to using SNP-level functional data, to inform the prior probability of causal association. Through simulation we show that when ranking SNPs by our empirical Bayes factor in a fine-mapping study, the causal SNP rank is generally as high as or higher than the rank obtained using Bayes factors with other plausible values of the prior variance. Importantly, we also show that assigning SNP-specific prior probabilities of association based on expert prior functional knowledge of the disease mechanism can lead to improved causal SNP ranks compared to ranking with identical prior probabilities of association. We demonstrate the use of our methods by applying them to the fine mapping of the CASP8 region of chromosome 2 using genotype data from the Collaborative Oncological Gene-Environment Study (COGS) Consortium. The data we analysed included approximately 46,000 breast cancer case and 43,000 healthy control samples. © 2016 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to consider heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
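A schematic of the EM step described above, with the complex Wishart log-densities assumed to be computed elsewhere (this is a generic mixture E-step and prior update, not the authors' full algorithm):

```python
import numpy as np

def em_posteriors(log_densities, priors):
    """E-step: posterior class probabilities per pixel from class log-densities
    (e.g., complex Wishart log-pdfs of the sample covariance matrices) and priors.
    log_densities has shape (n_pixels, n_classes); priors has shape (n_classes,)."""
    logp = log_densities + np.log(priors)
    logp -= logp.max(axis=1, keepdims=True)      # stabilize before exponentiation
    post = np.exp(logp)
    return post / post.sum(axis=1, keepdims=True)

def update_priors(posteriors):
    """M-step piece: class priors re-estimated as mean posterior membership."""
    return posteriors.mean(axis=0)
```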
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Improving experimental phases for strong reflections prior to density modification
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...
2013-09-20
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de
A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
How likely are constituent quanta to initiate inflation?
Berezhiani, Lasha; Trodden, Mark
2015-08-06
In this study, we propose an intuitive framework for studying the problem of initial conditions in slow-roll inflation. In particular, we consider a universe at high, but sub-Planckian energy density and analyze the circumstances under which it is plausible for it to become dominated by inflated patches at late times, without appealing to the idea of self-reproduction. Our approach is based on defining a prior probability distribution for the constituent quanta of the pre-inflationary universe. To test the idea that inflation can begin under very generic circumstances, we make specific, yet quite general and well grounded, assumptions on the prior distribution. As a result, we are led to the conclusion that the probability for a given region to ignite inflation at sub-Planckian densities is extremely small. Furthermore, if one chooses to use the enormous volume factor that inflation yields as an appropriate measure, we find that the regions of the universe which started inflating at densities below the self-reproductive threshold nevertheless occupy a negligible physical volume in the present universe as compared to those domains that have never inflated.
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Predictions of malaria vector distribution in Belize based on multispectral satellite data
NASA Technical Reports Server (NTRS)
Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.
1996-01-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
USDA-ARS?s Scientific Manuscript database
Data assimilation and regression are two commonly used methods for predicting agricultural yield from remote sensing observations. Data assimilation is a generative approach because it requires explicit approximations of the Bayesian prior and likelihood to compute the probability density function...
1984-09-28
variables before simulation of model - Search for reality checks - Express uncertainty as a probability density distribution. ... probability that the software contains errors. This prior is updated as test failure data are accumulated. Only a p of 1 (software known to contain... discussed; both parametric and nonparametric versions are presented. It is shown by the author that the bootstrap underlies the jackknife method and
Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo
2014-07-01
A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms, an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution, following an energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Model Considerations for Memory-based Automatic Music Transcription
NASA Astrophysics Data System (ADS)
Albrecht, Štěpán; Šmídl, Václav
2009-12-01
The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density by combination of pdfs. The validity of the model is tested in simulation using synthetic data.
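A minimal sketch of the observation model described above: the recording y is approximated as a weighted superposition of library sound templates (columns of D). Non-negativity of the weights stands in here for the kind of prior assumption the paper encodes as a pdf, and the data are synthetic placeholders:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
D = np.abs(rng.normal(size=(64, 8)))            # library of 8 known sound spectra
w_true = np.array([0.0, 1.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0])
y = D @ w_true + 0.01 * rng.normal(size=64)     # observed superposition plus noise

w_hat, _ = nnls(D, y)                           # weights estimated under non-negativity
print(np.round(w_hat, 2))
```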
Ali, Anjum A; Dale, Anders M; Badea, Alexandra; Johnson, G Allan
2005-08-15
We present the automated segmentation of magnetic resonance microscopy (MRM) images of the C57BL/6J mouse brain into 21 neuroanatomical structures, including the ventricular system, corpus callosum, hippocampus, caudate putamen, inferior colliculus, internal capsule, globus pallidus, and substantia nigra. The segmentation algorithm operates on multispectral, three-dimensional (3D) MR data acquired at 90-microm isotropic resolution. Probabilistic information used in the segmentation is extracted from training datasets of T2-weighted, proton density-weighted, and diffusion-weighted acquisitions. Spatial information is employed in the form of prior probabilities of occurrence of a structure at a location (location priors) and the pairwise probabilities between structures (contextual priors). Validation using standard morphometry indices shows good consistency between automatically segmented and manually traced data. Results achieved in the mouse brain are comparable with those achieved in human brain studies using similar techniques. The segmentation algorithm shows excellent potential for routine morphological phenotyping of mouse models.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of an important geotechnical parameter, the compression modulus Es, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method integrates rigorously and efficiently the multiple precisions of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between the Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences were also discussed between single CPT samplings modeled with a normal distribution and with a simulated probability density curve based on maximum entropy theory. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimations are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
Causal illusions in children when the outcome is frequent
2017-01-01
Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could either be high or low. Children judged the relation between the two events to be stronger in the high probability of the outcome setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294
Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.
2011-01-01
Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall, and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.
Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
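A compact sketch of the Wald test logic referenced above, with thresholds set by the targeted false-alarm rate alpha and missed-detection rate beta (illustrative only; the per-sample log-likelihood ratios would come from the collision-probability model):

```python
import numpy as np

def wald_sprt(log_likelihood_ratios, alpha=0.01, beta=0.01):
    """Accumulate log-likelihood ratios (H1 vs H0) until a Wald threshold is crossed."""
    upper = np.log((1.0 - beta) / alpha)    # decide H1 (e.g., maneuver warranted)
    lower = np.log(beta / (1.0 - alpha))    # decide H0 (no maneuver)
    total = 0.0
    for k, llr in enumerate(log_likelihood_ratios, start=1):
        total += llr
        if total >= upper:
            return "accept H1", k
        if total <= lower:
            return "accept H0", k
    return "continue sampling", len(log_likelihood_ratios)
```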
Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.
Fennell, John; Baddeley, Roland
2012-10-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
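A toy illustration of the mechanism (not the paper's model): if a stated probability p is treated as if it summarized n observations and is combined with a Beta(a, b) prior over the true probability, the resulting estimate shrinks toward the prior mean, overweighting small p and underweighting large p. The parameter values below are arbitrary assumptions:

```python
def weighted_probability(p, a=1.0, b=1.0, n=10.0):
    """Posterior-mean style shrinkage of a stated probability toward the prior mean."""
    return (a + n * p) / (a + b + n)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(p, round(weighted_probability(p), 3))
```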
Estimating the probability that the Taser directly causes human ventricular fibrillation.
Sun, H; Haemmerich, D; Rahko, P S; Webster, J G
2010-04-01
This paper describes the first methodology and results for estimating the order of probability for Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) current density near the human heart estimated by using 3D finite-element (FE) models; (2) prior data of the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.
Improving chemical species tomography of turbulent flows using covariance estimation.
Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J
2017-05-01
Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of additive information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
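A minimal sketch of the Bayesian reconstruction step this enables: with a linear measurement model b = A x + e and a joint-normal prior N(mu, P) on the species distribution x, the MAP estimate has the standard closed form below (generic Gaussian inversion, not the authors' specific covariance estimator):

```python
import numpy as np

def map_reconstruction(A, b, noise_cov, prior_mean, prior_cov):
    """MAP estimate for b = A x + e with e ~ N(0, noise_cov) and x ~ N(prior_mean, prior_cov)."""
    Si = np.linalg.inv(noise_cov)
    Pi = np.linalg.inv(prior_cov)
    H = A.T @ Si @ A + Pi                      # posterior precision
    rhs = A.T @ Si @ b + Pi @ prior_mean
    return np.linalg.solve(H, rhs)
```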
Safe Onboard Guidance and Control Under Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars James
2011-01-01
An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
Predicting the cosmological constant with the scale-factor cutoff measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Simone, Andrea; Guth, Alan H.; Salem, Michael P.
2008-09-15
It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform, and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function, which in general is predefined using a known pdf but is constructed here in a new and different manner, using the considered system itself.
Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.
Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga
2015-10-01
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
Bayesian ionospheric multi-instrument 3D tomography
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Vierinen, Juha; Roininen, Lassi
2017-04-01
The tomographic reconstruction of ionospheric electron densities is an inverse problem that cannot be solved without relatively strong regularising additional information. In particular, the vertical electron density profile is determined predominantly by the regularisation. Often-utilised regularisations in ionospheric tomography include smoothness constraints and iterative methods with initial ionospheric models. Despite its crucial role, the regularisation is often hidden in the algorithm as a numerical procedure without physical understanding. The Bayesian methodology provides an interpretative approach to the problem, as the regularisation can be given as a physically meaningful and quantifiable prior probability distribution. The prior distribution can be based on ionospheric physics and on other available ionospheric measurements and their statistics. Updating the prior with measurements results in the posterior distribution, which carries all the available information combined. From the posterior distribution, the most probable state of the ionosphere can then be solved, together with the corresponding probability intervals. Altogether, the Bayesian methodology provides an understanding of how strong the given regularisation is, what information is gained from the measurements, and how reliable the final result is. In addition, the combination of different measurements and the temporal development can be taken into account in a very intuitive way. However, a direct implementation of the Bayesian approach requires inversion of large covariance matrices, resulting in computational infeasibility. In the presented method, Gaussian Markov random fields are used to form sparse matrix approximations for the covariances. The approach makes the problem computationally feasible while retaining the probabilistic and physical interpretation. Here, the Bayesian method with Gaussian Markov random fields is applied to ionospheric 3D tomography over Northern Europe. Multi-instrument measurements are utilised from the TomoScand receiver network for Low Earth orbit beacon satellite signals, from GNSS receiver networks, as well as from EISCAT ionosondes and incoherent scatter radars. The performance is demonstrated in the three-dimensional spatial domain, with the temporal development also taken into account.
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2015-05-01
Bayes factors (BFs) are becoming increasingly important tools in genetic association studies, partly because they provide a natural framework for including prior information. The Wakefield BF (WBF) approximation is easy to calculate and assumes a normal prior on the log odds ratio (logOR) with a mean of zero. However, the prior variance (W) must be specified. Because of the potentially high sensitivity of the WBF to the choice of W, we propose several new BF approximations with logOR ∼N(0,W), but allow W to take a probability distribution rather than a fixed value. We provide several prior distributions for W which lead to BFs that can be calculated easily in freely available software packages. These priors allow a wide range of densities for W and provide considerable flexibility. We examine some properties of the priors and BFs and show how to determine the most appropriate prior based on elicited quantiles of the prior odds ratio (OR). We show by simulation that our novel BFs have superior true-positive rates at low false-positive rates compared to those from both P-value and WBF analyses across a range of sample sizes and ORs. We give an example of utilizing our BFs to fine-map the CASP8 region using genotype data on approximately 46,000 breast cancer case and 43,000 healthy control samples from the Collaborative Oncological Gene-environment Study (COGS) Consortium, and compare the single-nucleotide polymorphism ranks to those obtained using WBFs and P-values from univariate logistic regression. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
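For context, a sketch of the fixed-variance Wakefield approximation that the paper generalizes, written here as evidence for association; the inputs (logOR estimate, its standard error, and W) are illustrative, and the exact form should be checked against Wakefield's paper:

```python
import numpy as np

def wakefield_bf(beta_hat, se, W):
    """Approximate BF for association with a N(0, W) prior on the logOR,
    given a logistic-regression estimate beta_hat with standard error se."""
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    return np.sqrt(V / (V + W)) * np.exp(z2 * W / (2.0 * (V + W)))

print(wakefield_bf(beta_hat=0.2, se=0.05, W=0.04))   # hypothetical SNP
```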
Estimating the number of terrestrial organisms on the moon.
NASA Technical Reports Server (NTRS)
Dillon, R. T.; Gavin, W. R.; Roark, A. L.; Trauth, C. A., Jr.
1973-01-01
Methods used to obtain estimates for the biological loadings on moon bound spacecraft prior to launch are reviewed, along with the mathematical models used to calculate the microorganism density on the lunar surface (such as it results from contamination deposited by manned and unmanned flights) and the probability of lunar soil sample contamination. Some of the results obtained by the use of a lunar inventory system based on these models are presented.
ERIC Educational Resources Information Center
Huynh, Huynh
By noting that a Rasch or two parameter logistic (2PL) item belongs to the exponential family of random variables and that the probability density function (pdf) of the correct response (X=1) and the incorrect response (X=0) are symmetric with respect to the vertical line at the item location, it is shown that the conjugate prior for ability is…
Use of collateral information to improve LANDSAT classification accuracies
NASA Technical Reports Server (NTRS)
Strahler, A. H. (Principal Investigator)
1981-01-01
Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification, permitting both continuous and categorical variables in a single model and fitting empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data.
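A minimal sketch of item (1), maximum likelihood classification with class prior probabilities (the class statistics, priors, and pixel values are invented for illustration): the discriminant for class k adds ln P(k) to the multivariate normal log-likelihood, so discrete collateral information enters through the prior term.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_with_priors(x, means, covs, priors):
    """Assign pixel vector x to the class maximizing ln p(x | k) + ln P(k),
    with Gaussian class-conditional densities."""
    scores = [multivariate_normal.logpdf(x, mean=m, cov=c) + np.log(p)
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))

# Two hypothetical spectral classes with collateral-data-derived priors.
means = [np.array([40.0, 60.0]), np.array([55.0, 50.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 30.0]
priors = [0.8, 0.2]   # e.g. terrain type from collateral data favours class 0
print(classify_with_priors(np.array([48.0, 55.0]), means, covs, priors))
```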
A method to compute SEU fault probabilities in memory arrays with error correction
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
With the increasing packing densities in VLSI technology, single event upsets (SEUs) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory locations are read randomly, according to a distribution that is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
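The sketch below is not the paper's derivation; it is a back-of-the-envelope model under stated assumptions (per-bit Poisson upsets, exponentially distributed read intervals, independent words) showing how a mishap probability of the kind described, two upsets on one word before a read, can be estimated.

```python
import numpy as np

def mishap_probability(M, n_bits, lam_bit, read_rate, T_hours):
    """Approximate P(>=1 mishap in T_hours), a mishap being two SEUs hitting
    the same word before it is next read.
    Assumptions (not from the paper): per-bit upsets are Poisson with rate
    lam_bit [1/hr]; reads of a given word are Poisson with rate read_rate [1/hr];
    words behave independently."""
    lam_word = n_bits * lam_bit
    # After a first upset, the second upset "wins the race" against the next
    # read with probability lam_word / (lam_word + read_rate).
    p_second_before_read = lam_word / (lam_word + read_rate)
    expected_mishaps = M * lam_word * p_second_before_read * T_hours
    return 1.0 - np.exp(-expected_mishaps)

# Hypothetical 1 Mword memory, 22 bits/word, 1e-7 upsets/bit/hr, 1 read/word/hr.
print(mishap_probability(M=2**20, n_bits=22, lam_bit=1e-7,
                         read_rate=1.0, T_hours=8760))
```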
On the Origins of Suboptimality in Human Probabilistic Inference
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M.
2014-01-01
Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically-optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. Instead we show that subjects' response variability was mainly driven by a combination of a noisy estimation of the parameters of the priors, and by variability in the decision process, which we represent as a noisy or stochastic posterior. PMID:24945142
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
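As a small illustration of the Gaussianizing step mentioned above (not the joint estimation developed in the paper; the band data are synthetic), the one-parameter Box-Cox power-law transform can be fit per band before applying a Gaussian-based detector:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
band = rng.gamma(shape=2.0, scale=3.0, size=5000)   # skewed, non-Gaussian band

# Fit the Box-Cox exponent by maximum likelihood; the transformed data are
# closer to Gaussian, so a multivariate-normal detector is better justified.
transformed, lam = stats.boxcox(band)
print("lambda =", round(lam, 3),
      "| skewness before/after:",
      round(stats.skew(band), 2), round(stats.skew(transformed), 2))
```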
A poisson process model for hip fracture risk.
Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S
2010-08-01
The primary method for assessing fracture risk in osteoporosis relies primarily on measurement of bone mass. Estimation of fracture risk is most often evaluated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate, lambda. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is defined that itself is a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and no prior fracture can also be computed, and also demonstrate results similar to clinical data. The model elucidates the fact that the hip fracture process is inherently random and improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
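A minimal numeric sketch of the thinned-Poisson idea (the parameter values are invented, not the paper's fitted functions of age and BMD): falls arrive as a Poisson process, each fall load is Weibull-distributed, and only falls whose load exceeds hip strength are retained, so fractures are again Poisson with a thinned rate.

```python
import numpy as np

def hip_fracture_probability(fall_rate, weibull_k, weibull_scale, strength, years):
    """P(at least one hip fracture in `years`) under a thinned Poisson model.
    fall_rate: falls per year; fall load ~ Weibull(k, scale); fracture if load > strength."""
    # Weibull survival function gives P(load > strength) for a single fall.
    p_fracture_given_fall = np.exp(-(strength / weibull_scale) ** weibull_k)
    thinned_rate = fall_rate * p_fracture_given_fall   # fractures per year
    return 1.0 - np.exp(-thinned_rate * years)

# Hypothetical case: 0.8 falls/yr, loads ~ Weibull(k=2, scale=2.5 kN), strength 3.2 kN.
print(hip_fracture_probability(0.8, 2.0, 2.5, 3.2, years=10))
```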
Positive contraction mappings for classical and quantum Schrödinger systems
NASA Astrophysics Data System (ADS)
Georgiou, Tryphon T.; Pavon, Michele
2015-03-01
The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
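A minimal sketch of the kind of fixed-point iteration discussed for the discrete two-marginal case (the prior coupling and the target marginals are invented; this is the generic Fortet/IPF, or Sinkhorn, scaling, not the paper's quantum construction): diagonal scalings of the prior coupling are iterated until the row and column sums match the prescribed marginals.

```python
import numpy as np

def discrete_bridge(K, mu0, mu1, n_iter=200):
    """Two-marginal discrete Schroedinger bridge: scale the prior coupling K
    with diagonal factors so that row sums match mu0 and column sums match mu1."""
    phi0 = np.ones(K.shape[0])
    phi1 = np.ones(K.shape[1])
    for _ in range(n_iter):
        phi0 = mu0 / (K @ phi1)
        phi1 = mu1 / (K.T @ phi0)
    return np.diag(phi0) @ K @ np.diag(phi1)   # bridged joint law

# Toy prior joint law of the endpoints and target marginals.
K = np.array([[0.3, 0.2], [0.1, 0.4]])
P = discrete_bridge(K, mu0=np.array([0.6, 0.4]), mu1=np.array([0.5, 0.5]))
print(P.sum(axis=1), P.sum(axis=0))   # approximately mu0 and mu1
```

The iteration converges geometrically in the Hilbert metric whenever the prior coupling has strictly positive entries, which is the contraction property the abstract alludes to.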
Constraining the interior density profile of a Jovian planet from precision gravity field data
NASA Astrophysics Data System (ADS)
Movshovitz, Naor; Fortney, Jonathan J.; Helled, Ravit; Hubbard, William B.; Thorngren, Daniel; Mankovich, Chris; Wahl, Sean; Militzer, Burkhard; Durante, Daniele
2017-10-01
The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Planetary gravity fields are usually described by the coefficients in an expansion of the gravitational potential. Recently, high precision measurements of these coefficients for Jupiter and Saturn have been made by the radio science instruments on the Juno and Cassini spacecraft, respectively. The resulting coefficients come with an associated uncertainty. And while the task of matching a given density profile with a given set of gravity coefficients is relatively straightforward, the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on imperfect knowledge of the H/He equation of state and on the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet, constrained only by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We demonstrate this approach with a sample of Jupiter interior models based on recent Juno data and discuss prospects for Saturn.
NASA Astrophysics Data System (ADS)
Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.
2017-12-01
The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how to best account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We apply this approach to produce a sample of Saturn interior models based on gravity data from Grand Finale orbits and discuss their implications.
The Chandra Source Catalog: X-ray Aperture Photometry
NASA Astrophysics Data System (ADS)
Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
The Chandra Source Catalog (CSC) represents a reanalysis of the entire ACIS and HRC imaging observations over the 9-year Chandra mission. We describe here the method by which fluxes are measured for detected sources. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect. Source fluxes are estimated post-facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which could be either non-informative, or in case there exist previous observations of the same source, strongly informative. The current implementation is however limited to non-informative priors. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it.
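A stripped-down sketch of the kind of calculation described (it ignores the PSF-fraction and overlap corrections handled by the catalog pipeline; all counts and prior parameters are invented): with a Poisson likelihood, a non-informative gamma prior, and a known expected background in the source aperture, the posterior density for the source intensity can be evaluated on a grid and summarized by a credible range.

```python
import numpy as np
from scipy import stats

# Hypothetical aperture counts and exposure (not CSC values).
N_src = 12          # counts in the source aperture
b_expected = 3.5    # expected background counts in the source aperture
exposure = 5e3      # seconds; the source intensity s is in counts/s

# Weakly informative gamma prior on s (shape 1, very small rate parameter).
alpha0, beta0 = 1.0, 1e-6

# Posterior on a grid: gamma prior times Poisson likelihood with known background.
s = np.linspace(0.0, 10 * (N_src + 1) / exposure, 4000)
log_post = (stats.gamma.logpdf(s, a=alpha0, scale=1.0 / beta0)
            + stats.poisson.logpmf(N_src, mu=s * exposure + b_expected))
post = np.exp(log_post - log_post.max())
ds = s[1] - s[0]
post /= post.sum() * ds

mean = (s * post).sum() * ds
cdf = np.cumsum(post) * ds
lo, hi = s[np.searchsorted(cdf, 0.05)], s[np.searchsorted(cdf, 0.95)]
print(f"intensity ~ {mean:.2e} cts/s, 90% credible range [{lo:.2e}, {hi:.2e}]")
```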
Kanis, John A; Harvey, Nicholas C; Cooper, Cyrus; Johansson, Helena; Odén, Anders; McCloskey, Eugene V
2016-01-01
In most assessment guidelines, treatment for osteoporosis is recommended in individuals with prior fragility fractures, especially fractures at the spine and hip. However, for those without prior fractures, the intervention thresholds can be derived using different methods. The aim of this report was to undertake a systematic review of the available information on the use of FRAX® in assessment guidelines, in particular the setting of thresholds and their validation. We identified 120 guidelines or academic papers that incorporated FRAX of which 38 provided no clear statement on how the fracture probabilities derived are to be used in decision-making in clinical practice. The remainder recommended a fixed intervention threshold (n=58), most commonly as a component of more complex guidance (e.g. bone mineral density (BMD) thresholds) or an age-dependent threshold (n=22). Two guidelines have adopted both age-dependent and fixed thresholds. Fixed probability thresholds have ranged from 4 to 20 % for a major fracture and 1.3-5 % for hip fracture. More than one half (39) of the 58 publications identified utilized a threshold probability of 20 % for a major osteoporotic fracture, many of which also mention a hip fracture probability of 3 % as an alternative intervention threshold. In nearly all instances, no rationale is provided other than that this was the threshold used by the National Osteoporosis Foundation of the US. Where undertaken, fixed probability thresholds have been determined from tests of discrimination (Hong Kong), health economic assessment (US, Switzerland), to match the prevalence of osteoporosis (China) or to align with pre-existing guidelines or reimbursement criteria (Japan, Poland). Age-dependent intervention thresholds, first developed by the National Osteoporosis Guideline Group (NOGG), are based on the rationale that if a woman with a prior fragility fracture is eligible for treatment, then, at any given age, a man or woman with the same fracture probability but in the absence of a previous fracture (i.e. at the ‘fracture threshold’) should also be eligible. Under current NOGG guidelines, based on age-dependent probability thresholds, inequalities in access to therapy arise especially at older ages (≥ 70 years) depending on the presence or absence of a prior fracture. An alternative threshold using a hybrid model reduces this disparity. The use of FRAX (fixed or age-dependent thresholds) as the gateway to assessment identifies individuals at high risk more effectively than the use of BMD. However, the setting of intervention thresholds needs to be country-specific. PMID:27465509
Self-Supervised Dynamical Systems
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
Pluskiewicz, W; Adamczyk, P; Czekajło, A; Grzeszczak, W; Drozdzowska, B
2015-12-01
In 770 postmenopausal women, the fracture incidence during a 4-year follow-up was analyzed in relation to the fracture probability (FRAX risk assessment tool) and risk (Garvan risk calculator) predicted at baseline. Incident fractures occurred in 62 subjects with a higher prevalence in high-risk subgroups. Prior fracture, rheumatoid arthritis, femoral neck T-score, and falls were independently associated with fracture incidence. The aim of the study was to analyze the incidence of fractures during a 4-year follow-up in relation to the baseline fracture probability and risk. Enrolled in the study were 770 postmenopausal women with a mean age of 65.7 ± 7.3 years. Bone mineral density (BMD) at the proximal femur, clinical data, and fracture probability using the FRAX tool and risk using the Garvan calculator were determined. Each subject was asked yearly by phone call about the incidence of fracture during the follow-up period. Of the 770 women, 62 had a fracture during follow-up, and 46 had a major fracture. At baseline, BMD was significantly lower, and fracture probability and fracture risk were significantly higher in women who had a fracture. Among women with a major fracture, the percentage with a high baseline fracture probability (>10 %) was significantly higher than among those without a fracture (p < 0.01). Fracture incidence during follow-up was significantly higher among women with a high baseline fracture probability (12.7 % vs. 5.2 %) and a high fracture risk (9.2 % vs. 5.3 %), so that the "fracture-free survival" curves were significantly different (p < 0.05). The number of clinical risk factors noted at baseline was significantly associated with fracture incidence (chi-squared = 20.82, p < 0.01). Prior fracture, rheumatoid arthritis, and femoral neck T-score were identified as significant risk factors for major fractures (for any fractures, the influence of falls was also significant). During follow-up, fracture incidence was predicted by baseline fracture probability (FRAX risk assessment tool) and risk (Garvan risk calculator). A number of clinical risk factors, including prior fracture, rheumatoid arthritis, femoral neck T-score, and falls, were independently associated with an increased incidence of fractures.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdfs, means, variances, or covariances are obtained independently of each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
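A minimal sketch of the MAP computation for the single-truncated case (a synthetic linear forward problem, not the paper's fault geometry or data): with a prior N(m0, σ_m² I) restricted to slip ≥ 0 and Gaussian data errors, the MAP solves a non-negative least-squares problem formed by stacking the whitened data and prior terms.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Hypothetical linear forward problem: d = G m + noise, with slip m >= 0.
n_obs, n_patch = 30, 10
G = rng.normal(size=(n_obs, n_patch))
m_true = np.maximum(rng.normal(1.0, 1.0, n_patch), 0.0)
sigma_d = 0.1
d = G @ m_true + sigma_d * rng.normal(size=n_obs)

# Prior: m ~ N(m0, sigma_m^2 I), truncated to m >= 0 (single truncation).
m0 = np.zeros(n_patch)
sigma_m = 1.0

# MAP of the truncated Gaussian posterior = non-negative least squares on the
# stacked, whitened system [G/sigma_d; I/sigma_m] m ~ [d/sigma_d; m0/sigma_m].
A = np.vstack([G / sigma_d, np.eye(n_patch) / sigma_m])
b = np.concatenate([d / sigma_d, m0 / sigma_m])
m_map, _ = nnls(A, b)
print(np.round(m_map, 2))
```

For the double-truncated case, the same stacked system can be passed to a bounded-variable least-squares solver instead.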
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of simulated snow microwave signals before and after each random-walk step. It thereby approximates the probability of the snow/soil parameters conditioned on the measured microwave brightness temperature (TB) signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size, and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the snow water equivalent (SWE) retrieval is a major concern. Therefore, in this paper, a sensitivity test is first carried out to study how accurate the snow emission models and how specific the snow priors need to be to keep the SWE error within a certain bound. Synthetic TB simulated from the measured snow properties plus a 2-K observation error is used for this purpose. The test aims to provide guidance on the MCMC application under different circumstances. Later, the method is applied to snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using the measured TB from ground-based radiometers at different bands. Based on the previous work, the error in these practical cases is studied, and the error sources are separated and quantified.
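The sketch below is a generic random-walk Metropolis retrieval of a single parameter from a noisy observation with a Gaussian prior (the forward function is a toy stand-in for a snow emission model, and all numbers are invented); it only illustrates the accept/reject step the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(swe):
    """Toy stand-in for a snow emission model: maps SWE [mm] to a TB [K]."""
    return 260.0 - 0.15 * swe

tb_obs = forward(120.0) + rng.normal(0.0, 2.0)   # synthetic observation with 2-K error
sigma_tb = 2.0
prior_mean, prior_sd = 100.0, 50.0               # assumed Gaussian prior on SWE

def log_post(swe):
    if swe <= 0:
        return -np.inf
    ll = -0.5 * ((tb_obs - forward(swe)) / sigma_tb) ** 2    # likelihood term
    lp = -0.5 * ((swe - prior_mean) / prior_sd) ** 2         # prior term
    return ll + lp

swe, chain = 100.0, []
for _ in range(20000):
    prop = swe + rng.normal(0.0, 10.0)                       # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(swe):
        swe = prop                                           # accept the move
    chain.append(swe)

post = np.array(chain[5000:])                                # discard burn-in
print(f"posterior SWE: {post.mean():.1f} +/- {post.std():.1f} mm")
```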
NASA Astrophysics Data System (ADS)
Cimermanová, K.
2009-01-01
In this paper we illustrate the influence of prior probabilities of diseases on diagnostic reasoning. For various prior probabilities of classified groups characterized by volatile organic compounds of breath profile, smokers and non-smokers, we constructed the ROC curve and the Youden index with related asymptotic pointwise confidence intervals.
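For illustration (synthetic marker scores, not the paper's breath data), the empirical ROC curve and Youden index can be computed directly from the two groups' score distributions, and the prior probabilities of the groups can then be folded into a prior-weighted accuracy at each threshold:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical breath-marker scores for the two groups (not the paper's data).
non_smokers = rng.normal(0.0, 1.0, 300)
smokers = rng.normal(1.2, 1.0, 200)

thresholds = np.sort(np.concatenate([non_smokers, smokers]))
tpr = np.array([(smokers >= t).mean() for t in thresholds])      # sensitivity
fpr = np.array([(non_smokers >= t).mean() for t in thresholds])  # 1 - specificity

youden = tpr - fpr
best = np.argmax(youden)
print(f"Youden index J = {youden[best]:.2f} at threshold {thresholds[best]:.2f}")

# With unequal prior probabilities, the expected accuracy of the classification
# rule weights sensitivity and specificity by the priors.
prior_smoker = 0.2
accuracy = prior_smoker * tpr + (1 - prior_smoker) * (1 - fpr)
print(f"max prior-weighted accuracy = {accuracy.max():.2f}")
```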
Pig Data and Bayesian Inference on Multinomial Probabilities
ERIC Educational Resources Information Center
Kern, John C.
2006-01-01
Bayesian inference on multinomial probabilities is conducted based on data collected from the game Pass the Pigs[R]. Prior information on these probabilities is readily available from the instruction manual, and is easily incorporated in a Dirichlet prior. Posterior analysis of the scoring probabilities quantifies the discrepancy between empirical…
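A minimal sketch of the kind of conjugate update described (the prior pseudo-counts and observed rolls are invented, not the article's data): multinomial outcome probabilities with a Dirichlet prior give a Dirichlet posterior simply by adding observed counts to the prior pseudo-counts.

```python
import numpy as np

# Pass the Pigs outcome categories (illustrative) and a Dirichlet prior whose
# pseudo-counts are assumed to encode the instruction-manual probabilities.
outcomes = ["sider", "razorback", "trotter", "snouter", "leaning jowler"]
prior_alpha = np.array([65.0, 22.0, 9.0, 3.0, 1.0])   # hypothetical pseudo-counts

# Hypothetical observed counts from rolling one pig many times.
counts = np.array([300, 110, 45, 12, 3])

posterior_alpha = prior_alpha + counts                 # conjugate Dirichlet update
posterior_mean = posterior_alpha / posterior_alpha.sum()
for name, p in zip(outcomes, posterior_mean):
    print(f"{name:15s} {p:.3f}")
```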
Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev
2016-01-01
The proposed method is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. The filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density function: Gaussian, Rayleigh, or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation-based priors: a total variation (TV) based prior, an anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. A finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods is presented for the BrainWeb dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with the CD-based prior performs better than the other priors under consideration.
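A hedged sketch of the first prior mentioned: a gradient-descent solver for a Gaussian data term plus a smoothed total-variation prior on a 2-D image. This is a generic implementation for illustration, not the authors' adaptive noise-identification framework, and all parameters are invented.

```python
import numpy as np

def tv_denoise(noisy, lam=0.1, step=0.2, n_iter=200, eps=1e-3):
    """Minimize 0.5*||u - noisy||^2 + lam * sum sqrt(|grad u|^2 + eps^2)
    by gradient descent (Gaussian data term, smoothed total-variation prior)."""
    u = noisy.copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # Divergence of (px, py) via (circular) backward differences.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = (u - noisy) - lam * div                 # gradient of the functional
        u -= step * grad
    return u

rng = np.random.default_rng(5)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)   # Gaussian noise case
print(round(np.abs(noisy - clean).mean(), 3),
      round(np.abs(tv_denoise(noisy) - clean).mean(), 3))
```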
Elapsed decision time affects the weighting of prior probability in a perceptual decision task
Hanks, Timothy D.; Mazurek, Mark E.; Kiani, Roozbeh; Hopp, Elizabeth; Shadlen, Michael N.
2012-01-01
Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (i) decisions that linger tend to arise from less reliable evidence, and (ii) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal cortex (LIP) of rhesus monkeys performing this task. PMID:21525274
Elapsed decision time affects the weighting of prior probability in a perceptual decision task.
Hanks, Timothy D; Mazurek, Mark E; Kiani, Roozbeh; Hopp, Elisabeth; Shadlen, Michael N
2011-04-27
Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (1) decisions that linger tend to arise from less reliable evidence, and (2) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal area (LIP) of rhesus monkeys performing this task.
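A toy simulation of the proposed mechanism (a generic drift-diffusion model with an additive bias that grows with elapsed decision time; the parameters are invented and not fitted to the paper's behavioral or neural data):

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_trial(coherence, prior_bias_rate, bound=1.0, dt=1e-3, k=5.0):
    """Drift-diffusion to a bound; prior knowledge enters as a bias signal
    that increases linearly with elapsed decision time (toy version of the
    dynamic-bias account). Returns (chose positive option, decision time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += k * coherence * dt + rng.normal(0.0, np.sqrt(dt))  # momentary evidence
        x += prior_bias_rate * dt                               # dynamic prior bias
        t += dt
    return (x > 0, t)

# Weak stimulus favouring the positive option, with a prior bias toward it.
choices = [simulate_trial(coherence=0.05, prior_bias_rate=0.5)[0]
           for _ in range(2000)]
print("P(choose prior-favoured option) =", np.mean(choices))
```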
NASA Astrophysics Data System (ADS)
Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.
2017-12-01
Harrat Rahat is a predominantly mafic, 20,000 km2 volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and 40Ar/39Ar and 36Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2 %/km2 along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south. These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.
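A minimal sketch of the spatial-density step (synthetic vent coordinates; the exponential age-weighting shown is an assumption, not the authors' exact model): kernel density estimation over vent locations, with weights that emphasize younger vents, gives a relative vent-opening probability surface.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Hypothetical vent locations (km, local coordinates) along a N-S volcanic axis,
# with eruption ages in ka.
n_vents = 300
x = rng.normal(0.0, 3.0, n_vents)            # east-west scatter about the axis
y = rng.uniform(0.0, 60.0, n_vents)          # along-axis position
ages = rng.uniform(0.0, 1200.0, n_vents)     # ka

# Weight younger vents more heavily (assumed exponential down-weighting).
weights = np.exp(-ages / 300.0)
kde = gaussian_kde(np.vstack([x, y]), weights=weights)

# Evaluate the relative vent-opening probability on a grid and renormalize.
gx, gy = np.meshgrid(np.linspace(-10, 10, 100), np.linspace(0, 60, 200))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
density /= density.sum()
print("peak cell probability:", density.max())
```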
Applications of quantum entropy to statistics
NASA Astrophysics Data System (ADS)
Silver, R. N.; Martz, H. F.
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
On trends in historical marine wind data
NASA Technical Reports Server (NTRS)
Cardone, Vincent J.; Greenwood, Juliet G.; Cane, Mark A.
1990-01-01
Long-period variations which include a trend toward strengthening winds over the last three decades have on the one hand been suggested to be real climatic changes, and on the other artifacts of the evolution of measuring techniques. An examination is presently conducted of individual ship reports from three regions with high data densities, in order to resolve this dispute. Even with corrections for instrumental effects, the pre-1950 winds appear weaker than post-1950 winds; the most probable explanation is the absence of universal sea state and Beaufort force standards prior to 1946.
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
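A simplified numerical sketch of the combination step (synthetic, skewed likelihoods on a sensitivity grid; this ignores the noninformative-prior derivation that is the paper's main contribution): the two likelihood functions are multiplied, a prior is applied, and the result is renormalized to give a combined posterior.

```python
import numpy as np

S = np.linspace(0.5, 8.0, 2000)   # climate sensitivity grid [K]

def lognormal_like(S, mode, sigma):
    """Hypothetical skewed likelihood, parameterized by its mode."""
    mu = np.log(mode) + sigma**2
    return np.exp(-0.5 * ((np.log(S) - mu) / sigma) ** 2) / S

like_instrumental = lognormal_like(S, mode=1.8, sigma=0.35)
like_paleo = lognormal_like(S, mode=2.5, sigma=0.5)

# Combine by multiplying likelihoods; a uniform prior is used here for simplicity,
# whereas the paper derives a noninformative prior for the combined inference.
posterior = like_instrumental * like_paleo
dS = S[1] - S[0]
posterior /= posterior.sum() * dS

cdf = np.cumsum(posterior) * dS
print("median ~", round(S[np.searchsorted(cdf, 0.5)], 2), "K")
```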
Nowakowska, Marzena
2017-04-01
The development of the Bayesian logistic regression model classifying the road accident severity is discussed. The already exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated when no expert opinion has been available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained logistic Bayesian models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of the model accuracy has been based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rémy, Alice; Le Galliard, Jean-François; Odden, Morten; Andreassen, Harry P
2014-07-01
During the settlement stage of dispersal, the outcome of conflicts between residents and immigrants should depend on the social organization of resident populations as well as on individual traits of immigrants, such as their age class, body mass and/or behaviour. We have previously shown that spatial distribution of food influences the social organization of female bank voles (Myodes glareolus). Here, we aimed to determine the relative impact of food distribution and immigrant age class on the success and demographic consequences of female bank vole immigration. We manipulated the spatial distribution of food within populations having either clumped or dispersed food. After a pre-experimental period, we released either adult immigrants or juvenile immigrants, for which we scored sociability and aggressiveness prior to introduction. We found that immigrant females survived less well and moved more between populations than resident females, which suggests settlement costs. However, settled juvenile immigrants had a higher probability to reproduce than field-born juveniles. Food distribution had little effect on the settlement success of immigrant females. Survival and settlement probabilities of immigrants were influenced by adult female density in opposite ways for adult and juvenile immigrants, suggesting strong adult-adult competition. Moreover, females of higher body mass at release had a lower probability to survive, and the breeding probability of settled immigrants increased with their aggressiveness and decreased with their sociability. Prior to the introduction of immigrants, resident females were more aggregated in the clumped food treatment than in the dispersed food treatment, but immigration reversed this relationship. In addition, differences in growth trajectories were seen during the breeding season, with populations reaching higher densities when adult immigrants were introduced in a plot with dispersed food, or when juvenile immigrants were introduced in a plot with clumped food. These results indicate the relative importance of intrinsic and extrinsic factors on immigration success and demographic consequences of dispersal and are of relevance to conservation actions, such as reinforcement of small populations. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
Biedermann, Alex; Taroni, Franco; Margot, Pierre
2012-01-31
Prior probabilities represent a core element of the Bayesian probabilistic approach to relatedness testing. This letter comments on the commentary "Use of prior odds for missing persons identifications" by Budowle et al., published recently in this journal. Contrary to Budowle et al., we argue that the concept of prior probabilities (i) is not endowed with the notion of objectivity, (ii) is not a case for computation, and (iii) does not require new guidelines edited by the forensic DNA community--as long as probability is properly considered as an expression of personal belief.
Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference
Wesson, R.L.; Bakun, W.H.; Perkins, D.M.
2003-01-01
Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
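A small numerical sketch of the association step (the locations, uncertainties, and priors are invented; fault geometry is reduced to a single point per fault for brevity): Bayes' theorem combines a Gaussian location PDF for an earthquake with prior probabilities for candidate faults, plus a "background" alternative, to yield association probabilities.

```python
import numpy as np

def location_likelihood(eq_xy, eq_sigma, fault_xy):
    """Likelihood of the observed epicentre if the event occurred on a fault
    represented (crudely) by a single point; Gaussian location uncertainty."""
    d2 = np.sum((np.asarray(eq_xy) - np.asarray(fault_xy)) ** 2)
    return np.exp(-0.5 * d2 / eq_sigma**2)

# Hypothetical epicentre with 5-km location uncertainty and two candidate faults.
eq_xy, eq_sigma = (3.0, 2.0), 5.0
faults = {"Fault A": (0.0, 0.0), "Fault B": (12.0, -4.0)}
priors = {"Fault A": 0.4, "Fault B": 0.3, "background": 0.3}

likes = {name: location_likelihood(eq_xy, eq_sigma, xy) for name, xy in faults.items()}
likes["background"] = 0.05            # assumed flat likelihood for background events

evidence = sum(priors[k] * likes[k] for k in priors)
posterior = {k: priors[k] * likes[k] / evidence for k in priors}
print(posterior)
```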
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
Entropy from State Probabilities: Hydration Entropy of Cations
2013-01-01
Entropy is an important energetic quantity determining the progression of chemical processes. We propose a new approach to obtain hydration entropy directly from probability density functions in state space. We demonstrate the validity of our approach for a series of cations in aqueous solution. Extensive validation of simulation results was performed. Our approach does not make prior assumptions about the shape of the potential energy landscape and is capable of calculating accurate hydration entropy values. Sampling times in the low nanosecond range are sufficient for the investigated ionic systems. Although the presented strategy is at the moment limited to systems for which a scalar order parameter can be derived, this is not a principal limitation of the method. The strategy presented is applicable to any chemical system where sufficient sampling of conformational space is accessible, for example, by computer simulations. PMID:23651109
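A minimal sketch of computing an entropy directly from a sampled probability density over a scalar order parameter (synthetic samples, not the authors' simulation data): estimate p(x) from the samples, then evaluate S = -k_B ∫ p ln p dx, here reported in units of k_B and compared against the known Gaussian value.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)

# Hypothetical samples of a scalar order parameter from a long simulation.
samples = rng.normal(loc=0.0, scale=0.8, size=50000)

# Estimate the probability density and integrate -p ln p numerically.
kde = gaussian_kde(samples)
x = np.linspace(samples.min() - 1, samples.max() + 1, 2000)
p = kde(x)
dx = x[1] - x[0]
entropy = -np.sum(p * np.log(np.clip(p, 1e-300, None))) * dx   # in units of k_B

# For a Gaussian the differential entropy is 0.5*ln(2*pi*e*sigma^2); compare.
print(round(entropy, 3), round(0.5 * np.log(2 * np.pi * np.e * 0.8**2), 3))
```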
Galactic Cosmic Ray Event-Based Risk Model (GERM) Code
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.
2013-01-01
This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects (signaling, bystander effects, etc.). These effects are ignored or impossible to treat in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated. Similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles. Additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case for interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdfs, means, variances, or covariances are obtained independently of each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multilevel modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density; here, word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
Arterial tree tracking from anatomical landmarks in magnetic resonance angiography scans
NASA Astrophysics Data System (ADS)
O'Neil, Alison; Beveridge, Erin; Houston, Graeme; McCormick, Lynne; Poole, Ian
2014-03-01
This paper reports on arterial tree tracking in fourteen Contrast Enhanced MRA volumetric scans, given the positions of a predefined set of vascular landmarks, by using the A* algorithm to find the optimal path for each vessel based on voxel intensity and a learnt vascular probability atlas. The algorithm is intended for use in conjunction with an automatic landmark detection step, to enable fully automatic arterial tree tracking. The scan is filtered to give two further images using the top-hat transform with 4mm and 8mm cubic structuring elements. Vessels are then tracked independently on the scan in which the vessel of interest is best enhanced, as determined from knowledge of typical vessel diameter and surrounding structures. A vascular probability atlas modelling expected vessel location and orientation is constructed by non-rigidly registering the training scans to the test scan using a 3D thin plate spline to match landmark correspondences, and employing kernel density estimation with the ground truth center line points to form a probability density distribution. Threshold estimation by histogram analysis is used to segment background from vessel intensities. The A* algorithm is run using a linear cost function constructed from the threshold and the vascular atlas prior. Tracking results are presented for all major arteries excluding those in the upper limbs. An improvement was observed when tracking was informed by contextual information, with particular benefit for peripheral vessels.
From Inverse Problems in Mathematical Physiology to Quantitative Differential Diagnoses
Zenker, Sven; Rubin, Jonathan; Clermont, Gilles
2007-01-01
The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses. We outline possible steps toward translating this computational approach to the bedside, to supplement today's evidence-based medicine with a quantitatively founded model-based medicine that integrates mechanistic knowledge with patient-specific information. PMID:17997590
Multiple proximate and ultimate causes of natal dispersal in white-tailed deer
Long, E.S.; Diefenbach, D.R.; Rosenberry, C.S.; Wallingford, B.D.
2008-01-01
Proximate and ultimate causes of dispersal in vertebrates vary, and relative importance of these causes is poorly understood. Among populations, inter- and intrasexual social cues for dispersal are thought to reduce inbreeding and local mate competition, respectively, and specific emigration cue may affect dispersal distance, such that inbreeding avoidance dispersal tends to be farther than dispersal to reduce local competition. To investigate potential occurrence of multiple proximate and ultimate causes of dispersal within populations, we radio-marked 363 juvenile male white-tailed deer (Odocoileus virginianus) in 2 study areas in Pennsylvania. Natal dispersal probability and distance were monitored over a 3-year period when large-scale management changes reduced density of adult females and increased density of adult males. Most dispersal (95-97%) occurred during two 12-week periods: spring, when yearling males still closely associate with related females, and prior to fall breeding season, when yearling males closely associate with other breeding-age males. Following changes to sex and age structure that reduced potential for inbreeding and increased potential for mate competition, annual dispersal probability did not change; however, probability of spring dispersal decreased, whereas probability of fall dispersal increased. Spring dispersal distances were greater than fall dispersal distances, suggesting that adaptive inbreeding avoidance dispersal requires greater distance than mate competition dispersal where opposite-sex relatives are philopatric and populations are not patchily distributed. Both inbreeding avoidance and mate competition are important ultimate causes of dispersal of white-tailed deer, but ultimate motivations for dispersal are proximately cued by different social mechanisms and elicit different responses in dispersers.
A robust power spectrum split cancellation-based spectrum sensing method for cognitive radio systems
NASA Astrophysics Data System (ADS)
Qi, Pei-Han; Li, Zan; Si, Jiang-Bo; Gao, Rui
2014-12-01
Spectrum sensing is an essential component in realizing cognitive radio, and the requirement for real-time spectrum sensing in the absence of prior information, under fading channels and noise uncertainty, poses a major challenge to classical spectrum sensing algorithms. Based on the stochastic properties of a scalar transformation of the power spectral density (PSD), a novel spectrum sensing algorithm, referred to as the power spectral density split cancellation method (PSC), is proposed in this paper. The PSC uses a scalar value as its test statistic: the ratio of each subband power to the full-band power. In addition, by exploiting the asymptotic normality and independence of the Fourier transform, the distribution of the ratio and the mathematical expressions for the probabilities of false alarm and detection in different channel models are derived. Further, the exact closed-form expression of the decision threshold is calculated in accordance with the Neyman-Pearson criterion. Analytical and simulation results show that the PSC is invulnerable to noise uncertainty and can achieve excellent detection performance without prior knowledge in additive white Gaussian noise and flat slow-fading channels. In addition, the PSC benefits from a low computational cost and can be completed in microseconds.
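A rough sketch of the subband-power ratio statistic is given below; the band layout, signal parameters, and decision rule are placeholders for illustration, not the paper's closed-form Neyman-Pearson threshold.

```python
# Rough sketch (not the paper's implementation): the PSC-style test statistic,
# i.e. the ratio of each subband's power to the full-band power, evaluated on
# a noise-only record and on a record containing a narrowband signal.
import numpy as np

def subband_ratios(x, n_bands=8):
    psd = np.abs(np.fft.rfft(x)) ** 2              # crude periodogram estimate
    bands = np.array_split(psd, n_bands)           # contiguous subbands
    total = psd.sum()
    return np.array([b.sum() / total for b in bands])

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)
noise = rng.normal(size=n)                          # H0: noise only
signal = noise + 0.5 * np.sin(2 * np.pi * 0.11 * t) # H1: narrowband signal present

for label, x in [("noise only", noise), ("signal present", signal)]:
    r = subband_ratios(x)
    print(label, "max subband ratio = %.3f" % r.max())
# Under H0 the ratios stay near 1/n_bands; an occupied subband inflates its
# ratio, which would be compared against a threshold in a full detector.
```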
NASA Astrophysics Data System (ADS)
Ruiz, Rafael O.; Meruane, Viviana
2017-06-01
The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties arising from imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the parameters most relevant to the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.
NASA Astrophysics Data System (ADS)
Frederick, B. C.; Gooch, B. T.; Richter, T.; Young, D. A.; Blankenship, D. D.; Aitken, A.; Siegert, M. J.
2013-12-01
Topography, sediment distribution and heat flux are all key boundary conditions governing the stability of the East Antarctic ice sheet (EAIS). Recent scientific scrutiny has been focused on several large, deep, interior EAIS basins including the submarine basal topography characterizing the Aurora Subglacial Basin (ASB). Numerical ice sheet models require accurate deformable sediment distribution and lithologic character constraints to estimate overall flow velocities and potential instability. To date, such estimates across the ASB have been derived from low-resolution satellite data or historic aerogeophysical surveys conducted prior to the advent of GPS. These rough basal condition estimates have led to poorly constrained ice sheet stability models for this remote 200,000 sq km expanse of the ASB. Here we present a significantly improved quantitative model characterizing the subglacial lithology and sediment in the ASB region. The product of comprehensive ICECAP (2008-2013) aerogeophysical data processing, this sedimentary basin model details the expanse and thickness of probable Wilkes Land subglacial sedimentary deposits and density contrast boundaries indicative of distinct subglacial lithologic units. As part of the process, BEDMAP2 subglacial topographic results were improved through the additional incorporation of ice-penetrating radar data collected during ICECAP field seasons 2010-2013. Detailed potential-field data pre-processing was completed, as well as a comprehensive evaluation of crustal density contrasts based on the gravity power spectrum; a subsequent high-pass filter was then applied to remove longer crustal wavelengths from the gravity dataset prior to inversion. Gridded BEDMAP2+ ice and bed radar surfaces were then utilized to establish bounding density models for the 3D gravity inversion process to yield probable sedimentary basin anomalies. Gravity inversion results were iteratively evaluated against radar along-track RMS deviation and gravity and magnetic depth-to-basement results. This geophysical data processing methodology provides a substantial improvement over prior Wilkes Land sedimentary basin estimates, yielding a higher-resolution model based upon iteration of several aerogeophysical datasets concurrently. This more detailed subglacial sedimentary basin model for Wilkes Land, East Antarctica will not only contribute to improvements in EAIS ice sheet model constraints, but will also provide significant quantifiable controls for subglacial hydrologic and geothermal flux estimates that are also sizable contributors to the cold-based, deep-interior basal ice dynamics dominant in the Wilkes Land region.
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
A Bayesian approach to modeling 2D gravity data using polygon states
NASA Astrophysics Data System (ADS)
Titus, W. J.; Titus, S.; Davis, J. R.
2015-12-01
We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds, but no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match the properties of the object.
Prior probability modulates anticipatory activity in category-specific areas.
Trapp, Sabrina; Lepsien, Jöran; Kotz, Sonja A; Bar, Moshe
2016-02-01
Bayesian models are currently a dominant framework for describing human information processing. However, it is not clear yet how major tenets of this framework can be translated to brain processes. In this study, we addressed the neural underpinning of prior probability and its effect on anticipatory activity in category-specific areas. Before fMRI scanning, participants were trained in two behavioral sessions to learn the prior probability and correct order of visual events within a sequence. The events of each sequence included two different presentations of a geometric shape and one picture of either a house or a face, which appeared with either a high or a low likelihood. Each sequence was preceded by a cue that gave participants probabilistic information about which items to expect next. This allowed examining cue-related anticipatory modulation of activity as a function of prior probability in category-specific areas (fusiform face area and parahippocampal place area). Our findings show that activity in the fusiform face area was higher when faces had a higher prior probability. The finding of a difference between levels of expectations is consistent with graded, probabilistically modulated activity, but the data do not rule out the alternative explanation of a categorical neural response. Importantly, these differences were only visible during anticipation, and vanished at the time of stimulus presentation, calling for a functional distinction when considering the effects of prior probability. Finally, there were no anticipatory effects for houses in the parahippocampal place area, suggesting sensitivity to stimulus material when looking at effects of prediction.
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the `best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
Neutrino mass priors for cosmology from random matrices
NASA Astrophysics Data System (ADS)
Long, Andrew J.; Raveri, Marco; Hu, Wayne; Dodelson, Scott
2018-02-01
Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σ mν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π (Σ mν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix Mν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over Mν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σ mν that we interpret as a Bayesian prior probability π (Σ mν). Assuming a basis-invariant probability distribution on Mν, also known as the anarchy hypothesis, we find that π (Σ mν) peaks close to the smallest Σ mν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π (Σ mν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for π (Σ mν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
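A rough numerical illustration of the eigenvalue-repulsion argument is sketched below; it is a cartoon, not the paper's prior construction: draw "anarchy" mass matrices with i.i.d. Gaussian entries, take the singular values as masses, rescale each draw so the largest splitting matches an assumed atmospheric value, and look at the implied sum of masses.

```python
# Illustrative sketch only: eigenvalue repulsion under the anarchy hypothesis.
# Random complex symmetric (Majorana-like) mass matrices are drawn, singular
# values are taken as masses, and each draw is rescaled so m3^2 - m1^2 equals
# an assumed atmospheric splitting. Numbers and procedure are placeholders.
import numpy as np

rng = np.random.default_rng(42)
dm2_atm = 2.5e-3                 # eV^2, approximate atmospheric splitting (assumed)
n_draws = 20000
sums = np.empty(n_draws)

for i in range(n_draws):
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    M = (A + A.T) / 2                                 # complex symmetric "anarchy" matrix
    m = np.sort(np.linalg.svd(M, compute_uv=False))   # masses, arbitrary scale
    scale = np.sqrt(dm2_atm / (m[2] ** 2 - m[0] ** 2))
    sums[i] = scale * m.sum()                         # sum of masses in eV after rescaling

print("median sum of masses: %.3f eV" % np.median(sums))
print("fraction of draws below 0.10 eV: %.2f" % np.mean(sums < 0.10))
# Repulsion between singular values disfavours degenerate spectra, so the draws
# pile up near the minimum allowed by the splittings, as the abstract describes.
```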
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
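A minimal sketch of the DPF definition follows, using an assumed lognormal intensity PDF as a stand-in for the flame data; the interval layout is arbitrary.

```python
# Minimal sketch of a discrete probability function (DPF): the integral of an
# assumed intensity PDF over discrete intervals. The lognormal PDF is a
# stand-in for illustration, not the flame measurements of the paper.
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import quad

pdf = lognorm(s=0.5, scale=1.0).pdf               # assumed intensity PDF
edges = np.linspace(0.0, 5.0, 21)                 # discrete intensity intervals

dpf = np.array([quad(pdf, a, b)[0] for a, b in zip(edges[:-1], edges[1:])])
centers = 0.5 * (edges[:-1] + edges[1:])

print("sum of DPF over intervals: %.4f" % dpf.sum())       # ~1, tail truncated
print("mean intensity from DPF:  %.4f" % np.sum(centers * dpf))
# All moments can be computed directly from the DPF without stochastic
# sampling, which is the advantage over Monte Carlo PDF methods noted above.
```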
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
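A small numerical check of the stated correspondence is sketched below, under an assumption made here for illustration: isotropic 2D forces with an exponential magnitude distribution, for which the Cartesian-component density takes the form K0(|x|/f0)/(π f0), a modified Bessel function of the second kind.

```python
# Numerical check under an illustrative assumption (isotropic 2D forces with
# exponential magnitudes): the Cartesian-component density should follow
# K0(|x|/f0) / (pi * f0), a modified Bessel function of the second kind.
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(0)
f0 = 1.0
n = 500_000
mag = rng.exponential(f0, n)                      # exponential force magnitudes
ang = rng.uniform(0.0, 2.0 * np.pi, n)            # isotropic directions
fx = mag * np.cos(ang)                            # Cartesian component

hist, edges = np.histogram(fx, bins=200, range=(-5, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = k0(np.abs(centers) / f0) / (np.pi * f0)

err = np.max(np.abs(hist - theory)[np.abs(centers) > 0.1])  # skip the log singularity at 0
print("max |histogram - K0 form| away from the origin: %.4f" % err)
```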
Bayesian nonparametric regression with varying residual density
Pati, Debdeep; Dunson, David B.
2013-01-01
We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
Internal strains after recovery of hardness in tempered martensitic steels for fusion reactors
NASA Astrophysics Data System (ADS)
Brunelli, L.; Gondi, P.; Montanari, R.; Coppola, R.
1991-03-01
After tempering, with recovery of hardness, MANET steels present internal strains; these residual strains increase with the quenching rate prior to tempering, and they remain after prolonged tempering times. Because these strains persist after thermal treatments that lead to low dislocation and sub-boundary densities, the possibility has been considered that the high swelling resistance of MANET is connected with these strain centres, which are probably associated with the formation, in ferrite, of Cr-enriched and contiguous Cr-depleted zones that may act as sinks for interstitials. Comparative observations on the internal strain behaviour of cold-worked 316L stainless steel appear consistent with this possibility.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
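A minimal sketch of the classical CDF-based baseline discussed here is given below, using SciPy's Kolmogorov-Smirnov test; the density-based tests proposed in the paper are not reproduced, and the distributions chosen are illustrative.

```python
# Minimal sketch of the classical CDF-based comparison: a Kolmogorov-Smirnov
# test of i.i.d. draws against a specified distribution. Discrepancies that sit
# in low-density regions are exactly what such CDF-based tests can smooth over.
import numpy as np
from scipy import stats

draws = stats.t(df=5).rvs(size=500, random_state=3)   # draws actually from Student-t

# Test the draws against a standard normal reference distribution.
stat, pvalue = stats.kstest(draws, stats.norm.cdf)
print("KS statistic = %.3f, p-value = %.3f" % (stat, pvalue))
```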
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
Neutrino mass priors for cosmology from random matrices
Long, Andrew J.; Raveri, Marco; Hu, Wayne; ...
2018-02-13
Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σm ν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π(Σm ν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix M ν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over M ν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σm ν that we interpret as a Bayesian prior probability π(Σm ν). Assuming a basis-invariant probability distribution on M ν, also known as the anarchy hypothesis, we find that π(Σm ν) peaks close to the smallest Σm ν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π(Σm ν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. In conclusion, we present fitting functions for π(Σm ν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
Colangelo, Michele; Camarero, Jesús J; Borghetti, Marco; Gazol, Antonio; Gentilesca, Tiziana; Ripullone, Francesco
2017-01-01
Hydraulic theory suggests that tall trees are at greater risk of drought-triggered death caused by hydraulic failure than small trees. In addition, the drop in growth observed in several tree species prior to death is often interpreted as an early-warning signal of impending death. We test these hypotheses by comparing size, growth, and wood-anatomy patterns of living and now-dead trees in two Italian oak forests showing recent mortality episodes. The mortality probability of trees is modeled as a function of recent growth and tree size. Drift-diffusion-jump (DDJ) metrics are used to detect early-warning signals. We found that the tallest trees of the anisohydric Italian oak survived drought better, contrary to what the theory predicts. Dead trees were characterized by lower height and a lower radial-growth trend than living trees in both study sites. The growth reduction of now-dead trees started about 10 years prior to their death and after two severe spring droughts during the early 2000s. This critical transition in growth was detected by DDJ metrics in the most affected site. Dead trees were also more sensitive to drought stress at this site, indicating different susceptibility to water shortage between trees. Dead trees did not form earlywood vessels with smaller lumen diameter than surviving trees but tended to form wider latewood vessels with a higher percentage of vessel area. Since living and dead trees showed similar competition, we do not expect that moderate thinning and a reduction in tree density would increase the short-term survival probability of trees.
Self-calibration of photometric redshift scatter in weak-lensing surveys
Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary
2010-06-11
Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors nor parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at several-σ level, but is unlikely to completely invalidate the self-calibration technique.
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
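A small numerical illustration of the difficulty described above follows: a Gram-Charlier A series truncated at fourth order, with skewness and excess kurtosis values chosen arbitrarily here, dips below zero in the tails.

```python
# Illustration of the non-positivity problem: a Gram-Charlier A series
# truncated at fourth order can go negative in the tails. The skewness and
# excess kurtosis values are arbitrary, chosen only to make the effect visible.
import numpy as np

def gram_charlier(x, skew, exkurt):
    he3 = x**3 - 3.0 * x                  # probabilists' Hermite polynomial He3
    he4 = x**4 - 6.0 * x**2 + 3.0         # probabilists' Hermite polynomial He4
    phi = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return phi * (1.0 + skew * he3 / 6.0 + exkurt * he4 / 24.0)

x = np.linspace(-6, 6, 2001)
f = gram_charlier(x, skew=1.0, exkurt=0.5)
print("minimum of truncated series: %.5f at x = %.2f" % (f.min(), x[np.argmin(f)]))
# A manifestly positive expansion, as discussed in the paper, avoids this.
```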
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude
Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander
2016-01-01
Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
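A minimal sketch of the general idea follows, under a deliberately simple assumption made here: a Gaussian model whose mean drifts linearly in time, fitted by maximum likelihood and extrapolated beyond the data window. The paper's model and its treatment of parameter uncertainty are richer than this.

```python
# Minimal sketch (simplified assumption, not the paper's model): fit a Gaussian
# whose mean drifts linearly in time by maximum likelihood, then extrapolate
# the fitted density beyond the observation window.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 300)
x = 2.0 - 1.5 * t + 0.3 * rng.normal(size=t.size)   # synthetic drifting series

def neg_log_lik(theta):
    a, b, log_sigma = theta
    return -np.sum(norm.logpdf(x, loc=a + b * t, scale=np.exp(log_sigma)))

fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0, 0.0]))
a, b, log_sigma = fit.x

t_future = 1.5                                      # extrapolation time
print("forecast density at t=%.1f: mean %.2f, sd %.2f"
      % (t_future, a + b * t_future, np.exp(log_sigma)))
```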
Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A
2016-04-01
A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous compared to unambiguous words even when prior and contextual information biases one interpretation only. These results suggest a probability distribution, whereby all outcomes and their associated probabilities of occurrence--even if very low--are represented and maintained.
Nallar, Rodolfo; Papp, Zsuzsanna; Leighton, Frederick A; Epp, Tasha; Pasick, John; Berhane, Yohannes; Lindsay, Robbin; Soos, Catherine
2016-01-01
The Canadian prairies are one of the most important breeding and staging areas for migratory waterfowl in North America. Hundreds of thousands of waterfowl of numerous species from multiple flyways converge in and disperse from this region annually; therefore this region may be a key area for potential intra- and interspecific spread of infectious pathogens among migratory waterfowl in the Americas. Using Blue-winged Teal (Anas discors, BWTE), which have the most extensive migratory range among waterfowl species, we investigated ecologic risk factors for infection and antibody status to avian influenza virus (AIV), West Nile virus (WNV), and avian paramyxovirus-1 (APMV-1) in the three prairie provinces (Alberta, Saskatchewan, and Manitoba) prior to fall migration. We used generalized linear models to examine infection or evidence of exposure in relation to host (age, sex, body condition, exposure to other infections), spatiotemporal (year, province), population-level (local population densities of BWTE, total waterfowl densities), and environmental (local pond densities) factors. The probability of AIV infection in BWTE was associated with host factors (e.g., age and antibody status), population-level factors (e.g., local BWTE population density), and year. An interaction between age and AIV antibody status showed that hatch year birds with antibodies to AIV were more likely to be infected, suggesting an antibody response to an active infection. Infection with AIV was positively associated with local BWTE density, supporting the hypothesis of density-dependent transmission. The presence of antibodies to WNV and APMV-1 was positively associated with age and varied among years. Furthermore, the probability of being WNV antibody positive was positively associated with pond density rather than host population density, likely because ponds provide suitable breeding habitat for mosquitoes, the primary vectors for transmission. Our findings highlight the importance of spatiotemporal, environmental, and host factors at the individual and population levels, all of which may influence dynamics of these and other viruses in wild waterfowl populations.
Dos Reis, Mario
2016-07-19
Constructing a multi-dimensional prior on the times of divergence (the node ages) of species in a phylogeny is not a trivial task, in particular, if the prior density is the result of combining different sources of information such as a speciation process with fossil calibration densities. Yang & Rannala (2006 Mol. Biol. Evol 23, 212-226. (doi:10.1093/molbev/msj024)) laid out the general approach to combine the birth-death process with arbitrary fossil-based densities to construct a prior on divergence times. They achieved this by calculating the density of node ages without calibrations conditioned on the ages of the calibrated nodes. Here, I show that the conditional density obtained by Yang & Rannala is misspecified. The misspecified density can sometimes be quite strange-looking and can lead to unintentionally informative priors on node ages without fossil calibrations. I derive the correct density and provide a few illustrative examples. Calculation of the density involves a sum over a large set of labelled histories, and so obtaining the density in a computer program seems hard at the moment. A general algorithm that may provide a way forward is given. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. © 2016 The Author(s).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conn, A. R.; Parker, Q. A.; Zucker, D. B.
In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey. Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominantly been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. (iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
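A small worked example of the updating argument follows, with made-up numbers: a detection method whose true-positive to false-positive ratio is 3, applied repeatedly, under two different prior probabilities of indigenous organic carbon.

```python
# Worked example with made-up numbers: repeated positive detections raise the
# probability of indigenous organic carbon whenever the true-positive to
# false-positive ratio (the likelihood ratio) exceeds one.
likelihood_ratio = 3.0          # assumed P(detect | indigenous) / P(detect | not)

for prior in (0.1, 0.5):        # low vs high prior probability samples
    odds = prior / (1.0 - prior)
    for n in (1, 2, 3):
        post_odds = odds * likelihood_ratio ** n
        post = post_odds / (1.0 + post_odds)
        print("prior %.1f, %d positive detections -> posterior %.2f" % (prior, n, post))
```

The numbers illustrate points (ii)-(iv): repetition compounds the evidence, a larger likelihood ratio compounds it faster, and a higher prior probability yields greater final confidence.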
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. The BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals can be taken exactly and represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
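A minimal sketch of the robust location step is given below, using the standard Weiszfeld iteration for the L1-median (geometric median); the robust spread estimate required by the full PDF estimator of the paper is not shown.

```python
# Minimal sketch: the L1-median (geometric median) via the standard Weiszfeld
# iteration, a robust substitute for the sample mean as a location estimator.
import numpy as np

def l1_median(X, n_iter=100, eps=1e-8):
    y = np.median(X, axis=0)                     # robust starting point
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - y, axis=1), eps)  # avoid division by zero
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

rng = np.random.default_rng(5)
clean = rng.normal(0.0, 1.0, size=(200, 2))
outliers = rng.normal(8.0, 0.5, size=(20, 2))    # 10% gross outliers
X = np.vstack([clean, outliers])

print("sample mean:", np.round(X.mean(axis=0), 2))
print("L1-median:  ", np.round(l1_median(X), 2))  # stays near the clean cluster
```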
A Bayesian approach to microwave precipitation profile retrieval
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.
1995-01-01
A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
Benford's law and the FSD distribution of economic behavioral micro data
NASA Astrophysics Data System (ADS)
Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George
2017-11-01
In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information-theoretic, entropy-based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family is used to recover country-based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
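For reference, Benford's law assigns first-digit probability log10(1 + 1/d); the short sketch below compares that reference distribution with the empirical FSD frequencies of an invented, income-like lognormal sample (it is not the paper's empirical-likelihood procedure):

import numpy as np

digits = np.arange(1, 10)
benford = np.log10(1.0 + 1.0 / digits)        # P(d) = log10(1 + 1/d)

def first_significant_digit(x):
    return int(f"{abs(x):e}"[0])              # e.g. 3.1415e+02 -> 3

rng = np.random.default_rng(1)
incomes = np.exp(rng.normal(10.0, 1.5, size=10000))   # hypothetical income-like sample
fsd = np.array([first_significant_digit(x) for x in incomes])
for d, pb in zip(digits, benford):
    print(f"d={d}: Benford {pb:.3f}   empirical {np.mean(fsd == d):.3f}")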
Planet-wide sand motion on Mars
Bridges, N.T.; Bourke, M.C.; Geissler, P.E.; Banks, M.E.; Colon, C.; Diniega, S.; Golombek, M.P.; Hansen, C.J.; Mattson, S.; McEwen, A.S.; Mellon, M.T.; Stantzos, N.; Thomson, B.J.
2012-01-01
Prior to Mars Reconnaissance Orbiter data, images of Mars showed no direct evidence for dune and ripple motion. This was consistent with climate models and lander measurements indicating that winds of sufficient intensity to mobilize sand were rare in the low-density atmosphere. We show that many sand ripples and dunes across Mars exhibit movement of as much as a few meters per year, demonstrating that Martian sand migrates under current conditions in diverse areas of the planet. Most motion is probably driven by wind gusts that are not resolved in global circulation models. A past climate with a thicker atmosphere is only required to move large ripples that contain coarse grains. © 2012 Geological Society of America.
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
A new prior for bayesian anomaly detection: application to biosurveillance.
Shen, Y; Cooper, G F
2010-01-01
Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to evaluate the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of observed events being monitored, due to a novel, closed-form derivation that is introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy to apply, computationally efficient, and performs as well as or better than a standard frequentist method.
Colangelo, Michele; Camarero, Jesús J.; Borghetti, Marco; Gazol, Antonio; Gentilesca, Tiziana; Ripullone, Francesco
2017-01-01
Hydraulic theory suggests that tall trees are at greater risk of drought-triggered death caused by hydraulic failure than small trees. In addition the drop in growth, observed in several tree species prior to death, is often interpreted as an early-warning signal of impending death. We test these hypotheses by comparing size, growth, and wood-anatomy patterns of living and now-dead trees in two Italian oak forests showing recent mortality episodes. The mortality probability of trees is modeled as a function of recent growth and tree size. Drift-diffusion-jump (DDJ) metrics are used to detect early-warning signals. We found that the tallest trees of the anisohydric Italian oak better survived drought contrary to what was predicted by the theory. Dead trees were characterized by a lower height and radial-growth trend than living trees in both study sites. The growth reduction of now-dead trees started about 10 years prior to their death and after two severe spring droughts during the early 2000s. This critical transition in growth was detected by DDJ metrics in the most affected site. Dead trees were also more sensitive to drought stress in this site indicating different susceptibility to water shortage between trees. Dead trees did not form earlywood vessels with smaller lumen diameter than surviving trees but tended to form wider latewood vessels with a higher percentage of vessel area. Since living and dead trees showed similar competition we did not expect that moderate thinning and a reduction in tree density would increase the short-term survival probability of trees. PMID:28270816
Nonlinear Spatial Inversion Without Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Curtis, A.; Nawaz, A.
2017-12-01
High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods would converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as 'localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.
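The flavor of the sampling-free calculation can be seen in one dimension: for a chain of cells with localized likelihoods and a Markov prior on the facies, the forward-backward recursions return the posterior marginals in closed form. The sketch below is a 1-D simplification with invented transition and likelihood values, not the authors' 2-D algorithm:

import numpy as np

def forward_backward(trans, lik, init):
    # Posterior marginals P(facies_t | all data) for a 1-D hidden Markov chain.
    # trans[i, j] = P(state j | state i); lik[t, i] = P(data_t | state i).
    T, S = lik.shape
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = init * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = lik[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Two facies (e.g. sand/shale) over five cells, with invented numbers.
trans = np.array([[0.9, 0.1], [0.2, 0.8]])
lik = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.2, 0.8], [0.5, 0.5]])
print(forward_backward(trans, lik, init=np.array([0.5, 0.5])))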
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.
Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs
NASA Astrophysics Data System (ADS)
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.
2013-07-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc^-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc^-1.
Fanshawe, T. R.
2015-01-01
There are many examples from the scientific literature of visual search tasks in which the length, scope and success rate of the search have been shown to vary according to the searcher's expectations of whether the search target is likely to be present. This phenomenon has major practical implications, for instance in cancer screening, when the prevalence of the condition is low and the consequences of a missed disease diagnosis are severe. We consider this problem from an empirical Bayesian perspective to explain how the effect of a low prior probability, subjectively assessed by the searcher, might impact on the extent of the search. We show how the searcher's posterior probability that the target is present depends on the prior probability and the proportion of possible target locations already searched, and also consider the implications of imperfect search, when the probability of false-positive and false-negative decisions is non-zero. The theoretical results are applied to two studies of radiologists' visual assessment of pulmonary lesions on chest radiographs. Further application areas in diagnostic medicine and airport security are also discussed. PMID:26587267
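For the perfect-search case, the searcher's posterior that the target is present after a fraction f of equally likely locations has been examined without success is p(1 - f) / (p(1 - f) + 1 - p); a small sketch with an invented 1% prior (the imperfect-search case discussed above adds false-negative and false-positive terms):

def posterior_target_present(prior, fraction_searched):
    # P(target present | not yet found), perfect search over equally likely locations.
    p, f = prior, fraction_searched
    return p * (1.0 - f) / (p * (1.0 - f) + (1.0 - p))

# With a low prior (e.g. a screening prevalence of 1%), the posterior decays
# quickly as the search proceeds, which may encourage early termination.
for f in (0.0, 0.25, 0.5, 0.75, 0.9):
    print(f"searched {f:>4.0%}: P(present) = {posterior_target_present(0.01, f):.4f}")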
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
NASA Technical Reports Server (NTRS)
Backus, George E.
1999-01-01
The purpose of the grant was to study how prior information about the geomagnetic field can be used to interpret surface and satellite magnetic measurements, to generate quantitative descriptions of prior information that might be so used, and to use this prior information to obtain from satellite data a model of the core field with statistically justifiable error estimates. The need for prior information in geophysical inversion has long been recognized. Data sets are finite, and faithful descriptions of aspects of the earth almost always require infinite-dimensional model spaces. By themselves, the data can confine the correct earth model only to an infinite-dimensional subset of the model space. Earth properties other than direct functions of the observed data cannot be estimated from those data without prior information about the earth. Prior information is based on what the observer already knows before the data become available. Such information can be "hard" or "soft". Hard information is a belief that the real earth must lie in some known region of model space. For example, the total ohmic dissipation in the core is probably less than the total observed geothermal heat flow out of the earth's surface. (In principle, ohmic heat in the core can be recaptured to help drive the dynamo, but this effect is probably small.) "Soft" information is a probability distribution on the model space, a distribution that the observer accepts as a quantitative description of her/his beliefs about the earth. The probability distribution can be a subjective prior in the sense of Bayes or the objective result of a statistical study of previous data or relevant theories.
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Statistical characterization of portal images and noise from portal imaging systems.
González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge
2013-06-01
In this paper, we consider the statistical characteristics of the so-called portal images, which are acquired prior to the radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the well-known noise and image features of other image modalities, such as natural images, can be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, like the characteristics of the natural image and the white noise. Finally, we discuss the implication of the results obtained for several noise reduction methods that operate in the wavelet domain.
Flight and ground tests of a very low density elastomeric ablative material
NASA Technical Reports Server (NTRS)
Olsen, G. C.; Chapman, A. J., III
1972-01-01
A very low density ablative material, a silicone-phenolic composite, was flight tested on a recoverable spacecraft launched by a Pacemaker vehicle system; in addition, it was tested in an arc heated wind tunnel at three conditions which encompassed most of the reentry heating conditions of the flight tests. The material was composed, by weight, of 71 percent phenolic spheres, 22.8 percent silicone resin, 2.2 percent catalyst, and 4 percent silica fibers. The tests were conducted to evaluate the ablator performance in both arc tunnel and flight tests and to determine the predictability of the ablator performance by using computed results from an existing one-dimensional numerical analysis. The flight tested ablator experienced only moderate surface recession and retained a smooth surface except for isolated areas where the char was completely removed, probably following reentry and prior to or during recovery. Analytical results show good agreement between arc tunnel and flight test results. The thermophysical properties used in the analysis are tabulated.
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.
Ord's kangaroo rats living in floodplain habitats: Factors contributing to habitat attraction
Miller, M.S.; Wilson, K.R.; Andersen, D.C.
2003-01-01
High densities of an aridland granivore, Ord's kangaroo rat (Dipodomys ordii), have been documented in floodplain habitats along the Yampa River in northwestern Colorado. Despite a high probability of inundation and attendant high mortality during the spring flood period, the habitat is consistently recolonized. To understand factors that potentially make riparian habitats attractive to D. ordii, we compared density and spatial pattern of seeds, density of a competitor (western harvester ant, Pogonomyrmex occidentalis), and digging energetics within floodplain habitats and between floodplain and adjacent upland habitats. Seed density within the floodplain was greatest in the topographically high (rarely flooded) floodplain and lowest immediately after a spring flood in the topographically low (frequently flooded) floodplain. Seed densities in adjacent upland habitat that never floods were higher than the lowest floodplain habitat. In the low floodplain prior to flooding, seeds had a clumped spatial pattern, which D. ordii is adept at exploiting; after spring flooding, a more random pattern resulted. Populations of the western harvester ant were low in the floodplain relative to the upland. Digging by D. ordii was energetically less expensive in floodplain areas than in upland areas. Despite the potential for mortality due to annual spring flooding, the combination of less competition from harvester ants and lower energetic costs of digging might promote the use of floodplain habitat by D. ordii.
Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function
ERIC Educational Resources Information Center
Fennell, John; Baddeley, Roland
2012-01-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is an essential step in simulation analysis and stochastic optimization: it is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...
De-blending deep Herschel surveys: A multi-wavelength approach
NASA Astrophysics Data System (ADS)
Pearson, W. J.; Wang, L.; van der Tak, F. F. S.; Hurley, P. D.; Burgarella, D.; Oliver, S. J.
2017-07-01
Aims: Cosmological surveys in the far-infrared are known to suffer from confusion. The Bayesian de-blending tool, XID+, currently provides one of the best ways to de-confuse deep Herschel SPIRE images, using a flat flux density prior. This work demonstrates that existing multi-wavelength data sets can be exploited to improve XID+ by providing an informed prior, resulting in more accurate and precise extracted flux densities. Methods: Photometric data for galaxies in the COSMOS field were used to constrain spectral energy distributions (SEDs) using the fitting tool CIGALE. These SEDs were used to create Gaussian prior estimates in the SPIRE bands for XID+. The multi-wavelength photometry and the extracted SPIRE flux densities were run through CIGALE again to allow us to compare the performance of the two priors. Inferred ALMA flux densities (F_infer^ALMA), at 870 μm and 1250 μm, from the best fitting SEDs from the second CIGALE run were compared with measured ALMA flux densities (F_meas^ALMA) as an independent performance validation. Similar validations were conducted with the SED modelling and fitting tool MAGPHYS and modified black-body functions to test for model dependency. Results: We demonstrate a clear improvement in agreement between the flux densities extracted with XID+ and existing data at other wavelengths when using the new informed Gaussian prior over the original uninformed prior. The residuals between F_meas^ALMA and F_infer^ALMA were calculated. For the Gaussian priors these residuals, expressed as a multiple of the ALMA error (σ), have a smaller standard deviation, 7.95σ for the Gaussian prior compared to 12.21σ for the flat prior; a reduced mean, 1.83σ compared to 3.44σ; and reduced skew to positive values, 7.97 compared to 11.50. These results were determined not to be significantly model dependent. This results in statistically more reliable SPIRE flux densities and hence statistically more reliable infrared luminosity estimates. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
Earth resources data analysis program, phase 2
NASA Technical Reports Server (NTRS)
1974-01-01
The efforts and findings of the Earth Resources Data Analysis Program are summarized. Results of a detailed study of the needs of EOD with respect to an applications development system (ADS) for the analysis of remotely sensed data, including an evaluation of four existing systems with respect to these needs, are described. Recommendations as to possible courses for EOD to follow to obtain a viable ADS are presented. Algorithmic development, comprising several subtasks, is discussed. These subtasks include the following: (1) two algorithms for multivariate density estimation; (2) a data smoothing algorithm; (3) a method for optimally estimating prior probabilities of unclassified data; and (4) further applications of the modified Cholesky decomposition in various calculations. Little effort was expended on task 3; however, two reports were reviewed.
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
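The practical effect of a per-visit detection probability p below 1 can be illustrated with the probability of at least one detection over k repeat surveys, 1 - (1 - p)^k; the loop below reuses the abstract's point estimates purely as illustrative inputs:

def prob_at_least_one_detection(p, k):
    # Probability that k independent surveys yield at least one detection.
    return 1.0 - (1.0 - p) ** k

for label, p in (("high-density quadrat", 0.952), ("low-density quadrat", 0.333)):
    for k in (1, 2, 3, 4):
        print(f"{label}, {k} visit(s): {prob_at_least_one_detection(p, k):.3f}")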
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
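A toy sketch of the propagation step (the candidate densities, model probabilities and response function below are all invented): draw one sample set from a mixture of the plausible models, evaluate the response once, then reweight the same samples under each candidate to get per-model estimates:

import numpy as np
from scipy import stats

# Three hypothetical plausible probability models with their model probabilities.
candidates = [
    (0.5, stats.norm(loc=1.0, scale=0.2)),
    (0.3, stats.lognorm(s=0.25, scale=1.0)),
    (0.2, stats.norm(loc=1.1, scale=0.3)),
]
weights = np.array([w for w, _ in candidates])

def mixture_pdf(x):
    return sum(w * m.pdf(x) for w, m in candidates)

rng = np.random.default_rng(0)
n = 20000
idx = rng.choice(len(candidates), size=n, p=weights)
draws = np.stack([m.rvs(size=n, random_state=rng) for _, m in candidates])
x = draws[idx, np.arange(n)]              # one mixture sample per position
response = x ** 2                         # stand-in for an expensive model evaluation

# Reweight the same samples under each candidate probability model.
for k, (w, m) in enumerate(candidates):
    iw = m.pdf(x) / mixture_pdf(x)
    print(f"model {k} (P={w}): E[response] = {np.average(response, weights=iw):.3f}")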
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
Random walks with shape prior for cochlea segmentation in ex vivo μCT.
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel
2016-09-01
Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks, where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. Random walks is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach in ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy results due to the probability density model constituted by the region term and shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term into random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.
ERIC Educational Resources Information Center
Satake, Eiki; Amato, Philip P.
2008-01-01
This paper presents an alternative version of formulas of conditional probabilities and Bayes' rule that demonstrate how the truth table of elementary mathematical logic applies to the derivations of the conditional probabilities of various complex, compound statements. This new approach is used to calculate the prior and posterior probabilities…
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula gives P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
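As a one-dimensional check on the quoted functional form (the value of A below is arbitrary; in the paper it is fixed by the density, temperature and pair potential), the Gaussian force density can be normalized in closed form and verified numerically:

import numpy as np
from scipy import integrate

A = 2.0                                        # arbitrary illustrative value of A
Z_closed = np.sqrt(np.pi / A)                  # integral of exp(-A F^2) over the real line
Z_num, _ = integrate.quad(lambda F: np.exp(-A * F ** 2), -np.inf, np.inf)
print(f"normalization: closed form {Z_closed:.6f}, numerical {Z_num:.6f}")

pdf = lambda F: np.exp(-A * F ** 2) / Z_closed
second_moment, _ = integrate.quad(lambda F: F ** 2 * pdf(F), -np.inf, np.inf)
print(f"<F^2> = {second_moment:.6f} (expected {1.0 / (2.0 * A):.6f})")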
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.
Rottman, Benjamin Margolin
2017-02-01
Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
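The normative posttest probability referred to here follows from converting the prior to odds, multiplying by the likelihood ratio, and converting back; a small sketch with invented test characteristics:

def posttest_probability(pretest_prob, sensitivity, false_positive_rate):
    # Normative Bayesian update for a positive test result.
    lr_positive = sensitivity / false_positive_rate
    prior_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = prior_odds * lr_positive
    return post_odds / (1.0 + post_odds)

# Hypothetical test: sensitivity 0.9, false-positive rate 0.1 (LR+ = 9).
for prior in (0.01, 0.10, 0.50):
    print(f"pretest {prior:.2f} -> posttest {posttest_probability(prior, 0.9, 0.1):.3f}")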
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
Use of prior odds for missing persons identifications.
Budowle, Bruce; Ge, Jianye; Chakraborty, Ranajit; Gill-King, Harrell
2011-06-27
Identification of missing persons from mass disasters is based on evaluation of a number of variables and observations regarding the combination of features derived from these variables. DNA typing now is playing a more prominent role in the identification of human remains, and particularly so for highly decomposed and fragmented remains. The strength of genetic associations, by either direct or kinship analyses, is often quantified by calculating a likelihood ratio. The likelihood ratio can be multiplied by prior odds based on nongenetic evidence to calculate the posterior odds, that is, by applying Bayes' Theorem, to arrive at a probability of identity. For the identification of human remains, the path creating the set and intersection of variables that contribute to the prior odds needs to be appreciated and well defined. Other than considering the total number of missing persons, the forensic DNA community has been silent on specifying the elements of prior odds computations. The variables include the number of missing individuals, eyewitness accounts, anthropological features, demographics and other identifying characteristics. The assumptions, supporting data and reasoning that are used to establish a prior probability that will be combined with the genetic data need to be considered and justified. Otherwise, data may be unintentionally or intentionally manipulated to achieve a probability of identity that cannot be supported and can thus misrepresent the uncertainty with associations. The forensic DNA community needs to develop guidelines for objectively computing prior odds.
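How strongly the posterior probability of identity depends on the framing of the prior odds can be illustrated with a toy calculation in which the prior odds are taken as 1/(N - 1) for N missing persons and the DNA likelihood ratio is an invented value:

def posterior_identity(likelihood_ratio, n_missing):
    # Posterior probability of identity with prior odds 1/(n_missing - 1).
    prior_odds = 1.0 / (n_missing - 1)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1.0 + post_odds)

LR = 1.0e6                   # hypothetical DNA likelihood ratio
for n in (10, 1000, 100000):
    print(f"{n} missing persons: P(identity) = {posterior_identity(LR, n):.6f}")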
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
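One member of the generalized skew-symmetric family is the skew-normal density 2φ(x)Φ(αx); the sketch below (an illustration with invented layer densities and interface parameters, not the authors' effective-density code) builds an asymmetric interfacial profile from its cumulative form and discretizes it into thin constant-density slices:

import numpy as np
from scipy import stats

def interface_profile(z, rho_top, rho_bottom, loc, scale, alpha):
    # Depth-dependent density whose interfacial derivative is a skew-normal PDF,
    # i.e. the profile itself follows the skew-normal cumulative distribution.
    t = stats.skewnorm.cdf(z, a=alpha, loc=loc, scale=scale)
    return rho_top + (rho_bottom - rho_top) * t

# Hypothetical two-layer film: layer densities, interface position, width and asymmetry.
z = np.linspace(-50.0, 50.0, 201)                       # depth grid, arbitrary units
rho = interface_profile(z, 1.0, 2.5, loc=0.0, scale=8.0, alpha=4.0)

# Discretize into thin constant-density slices (e.g. as input to Parratt's recursion).
thickness = np.diff(z)
slab_density = 0.5 * (rho[1:] + rho[:-1])
print(f"{thickness.size} slabs; density runs from {slab_density[0]:.3f} to {slab_density[-1]:.3f}")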
Strong profiling is not mathematically optimal for discovering rare malfeasors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Press, William H
2008-01-01
In a large population of individuals labeled j = 1,2,...,N, governments attempt to find the rare malfeasor j = j* (terrorist, for example) by making use of priors p_j that estimate the probability of individual j being a malfeasor. Societal resources for secondary random screening, such as airport search or police investigation, are concentrated against individuals with the largest priors. This may be called 'strong profiling' if the concentration is at least proportional to p_j for the largest values. Strong profiling often results in higher-probability, but otherwise innocent, individuals being repeatedly subjected to screening. We show here that, entirely apart from considerations of social policy, strong profiling is not mathematically optimal at finding malfeasors. Even if prior probabilities were accurate, their optimal use would be only as roughly the geometric mean between strong profiling and a completely uniform sampling of the population.
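The claim can be checked in a toy resampling model (screening with replacement, invented Dirichlet priors): if individual j is screened with per-round weight q_j, the malfeasor is found after an expected 1/q_j rounds, so the prior-averaged cost is sum_j p_j/q_j; sampling in proportion to p_j then costs exactly as much as uniform sampling, while square-root weighting does better:

import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.full(1000, 0.1))       # invented prior probabilities for 1000 individuals

def expected_screenings(p, q):
    # With per-round screening weights q, the malfeasor j is found after an
    # expected 1/q_j rounds, so the prior-averaged cost is sum_j p_j / q_j.
    return float(np.sum(p / q))

q_strong = p.copy()                          # strong profiling: screen in proportion to p_j
q_uniform = np.full_like(p, 1.0 / p.size)    # uniform random screening
q_sqrt = np.sqrt(p) / np.sqrt(p).sum()       # square-root weighting

for name, q in (("strong", q_strong), ("uniform", q_uniform), ("square-root", q_sqrt)):
    print(f"{name:>11}: expected screenings = {expected_screenings(p, q):.1f}")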
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the model parameters, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. Each experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
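As a rough illustration of the idea, with known class means, covariances, and priors, the one-dimensional misclassification probability after projection onto a direction b can be evaluated numerically and minimized over b. The two-class sketch below (with made-up parameters) only captures the spirit of the program; it is not the LFSPMC procedure itself.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def misclassification(b, means, covs, priors, grid=np.linspace(-20, 20, 4001)):
    """One-dimensional Bayes error after projecting each Gaussian class onto b."""
    b = b / np.linalg.norm(b)
    dens = np.array([pi * norm.pdf(grid, b @ mu, np.sqrt(b @ S @ b))
                     for pi, mu, S in zip(priors, means, covs)])
    # total mass minus the mass captured by the maximum-posterior decision rule
    return np.trapz(dens.sum(axis=0) - dens.max(axis=0), grid)

# hypothetical two-class problem
means  = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
covs   = [np.eye(2), np.array([[2.0, 0.3], [0.3, 1.0]])]
priors = [0.6, 0.4]
res = minimize(misclassification, x0=np.ones(2), args=(means, covs, priors),
               method="Nelder-Mead")
print(res.x / np.linalg.norm(res.x), res.fun)    # best direction and its error
```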
Code of Federal Regulations, 2014 CFR
2014-07-01
... effort to assess the probability of future behavior which could have an effect adverse to the national... prior experience with similar cases, reasonably suggest a degree of probability of prejudicial behavior...
Code of Federal Regulations, 2013 CFR
2013-07-01
... effort to assess the probability of future behavior which could have an effect adverse to the national... prior experience with similar cases, reasonably suggest a degree of probability of prejudicial behavior...
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
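A toy version of the contrast between the two approaches, using the standard loaded-die example, is sketched below: the classic solution treats the observed mean as exact, while the generalized version pushes a Gaussian uncertainty on that mean through the MaxEnt map to obtain a distribution over MaxEnt probabilities. The numbers (mean 4.2, standard deviation 0.15) are made up for illustration.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_die(mean):
    """Classic MaxEnt distribution over die faces subject to an exact mean constraint."""
    f = lambda lam: (faces * np.exp(lam * faces)).sum() / np.exp(lam * faces).sum() - mean
    lam = brentq(f, -5.0, 5.0)               # Lagrange multiplier for the mean constraint
    p = np.exp(lam * faces)
    return p / p.sum()

print(maxent_die(4.2))                       # classic MaxEnt: constraint treated as exact

# generalized approach: the constraint value is itself uncertain (Gaussian here),
# so the uncertainty is propagated through the MaxEnt map to a density over solutions
rng = np.random.default_rng(1)
samples = np.array([maxent_die(m) for m in rng.normal(4.2, 0.15, size=2000)])
print(samples.mean(axis=0))                  # mean of the MaxEnt posterior density
print(samples.std(axis=0))                   # spread induced by the uncertain constraint
```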
Wada, Tetsuo
Despite many empirical studies having been carried out on examiner patent citations, few have scrutinized the obstacles to prior art searching when adding patent citations during patent prosecution at patent offices. This analysis takes advantage of the longitudinal gap between an International Search Report (ISR) as required by the Patent Cooperation Treaty (PCT) and subsequent national examination procedures. We investigate whether several kinds of distance actually affect the probability that prior art is detected at the time of an ISR; this occurs much earlier than in national phase examinations. Based on triadic PCT applications between 2002 and 2005 for the trilateral patent offices (the European Patent Office, the US Patent and Trademark Office, and the Japan Patent Office) and their family-level citations made by the trilateral offices, we find evidence that geographical distance negatively affects the probability of capture of prior patents in an ISR. In addition, the technological complexity of an application negatively affects the probability of capture, whereas the volume of forward citations of prior art affects it positively. These results demonstrate the presence of obstacles to searching at patent offices, and suggest ways to design work sharing by patent offices, such that the duplication of search costs arises only when patent office search horizons overlap.
Calibrated birth-death phylogenetic time-tree priors for bayesian inference.
Heled, Joseph; Drummond, Alexei J
2015-05-01
Here we introduce a general class of multiple calibration birth-death tree priors for use in Bayesian phylogenetic inference. All tree priors in this class separate ancestral node heights into a set of "calibrated nodes" and "uncalibrated nodes" such that the marginal distribution of the calibrated nodes is user-specified whereas the density ratio of the birth-death prior is retained for trees with equal values for the calibrated nodes. We describe two formulations, one in which the calibration information informs the prior on ranked tree topologies, through the (conditional) prior, and the other which factorizes the prior on divergence times and ranked topologies, thus allowing uniform, or any arbitrary prior distribution on ranked topologies. Although the first of these formulations has some attractive properties, the algorithm we present for computing its prior density is computationally intensive. However, the second formulation is always faster and computationally efficient for up to six calibrations. We demonstrate the utility of the new class of multiple-calibration tree priors using both small simulations and a real-world analysis and compare the results to existing schemes. The two new calibrated tree priors described in this article offer greater flexibility and control of prior specification in calibrated time-tree inference and divergence time dating, and will remove the need for indirect approaches to the assessment of the combined effect of calibration densities and tree priors in Bayesian phylogenetic inference. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Updating: Learning versus Supposing
ERIC Educational Resources Information Center
Zhao, Jiaying; Crupi, Vincenzo; Tentori, Katya; Fitelson, Branden; Osherson, Daniel
2012-01-01
Bayesian orthodoxy posits a tight relationship between conditional probability and updating. Namely, the probability of an event "A" after learning "B" should equal the conditional probability of "A" given "B" prior to learning "B". We examine whether ordinary judgment conforms to the orthodox view. In three experiments we found substantial…
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
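A schematic of the bookkeeping such an expansion involves: expand an axisymmetric density over x = cos(theta) in Legendre polynomials and read the switching probability off as the mass in the reversed hemisphere. The exponential ansatz and the concentration parameter below are placeholders, not the paper's model.

```python
import numpy as np
from numpy.polynomial import Legendre, legendre

# placeholder axisymmetric density over x = cos(theta), concentrated near x = +1
kappa = 8.0
x = np.linspace(-1.0, 1.0, 2001)
p = np.exp(kappa * x)
p /= np.trapz(p, x)

# Legendre coefficients: c_l = (2l + 1)/2 * integral of p(x) P_l(x) dx
coef = [(2 * l + 1) / 2.0 * np.trapz(p * legendre.legval(x, [0.0] * l + [1.0]), x)
        for l in range(20)]
series = Legendre(coef)

# switching probability = probability mass in the reversed hemisphere, x in [-1, 0]
antideriv = series.integ()
print(antideriv(0.0) - antideriv(-1.0))          # from the truncated expansion
print(np.trapz(p[x <= 0], x[x <= 0]))            # direct integration, for comparison
```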
NASA Astrophysics Data System (ADS)
Al-Mudhafar, W. J.
2013-12-01
Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to identify the spatial facies distribution accurately, supporting a reliable reservoir model for optimal future reservoir performance. In this paper, facies estimation was carried out through multinomial logistic regression (MLR) with respect to well logs and core data from a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was applied to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. The observation is then added to the complete data matrix and the algorithm continues with the next observation containing missing values. MLR was chosen to maximize the likelihood and minimize the standard error of the nonlinear relationships between facies and the core and log data. MLR predicts the probabilities of the different possible facies given each independent variable by constructing a linear predictor function whose weights are combined with the independent variables through a dot product. A beta distribution of facies was taken as prior knowledge, and the predicted (posterior) probability was estimated from the MLR based on Bayes' theorem, which relates the posterior probability to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, a bootstrap was carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets; the model is re-fit accordingly to decrease the squared difference between the estimated and observed categorical variables (facies), thereby reducing the degree of uncertainty.
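The modelling steps described here (a multinomial logit for facies posteriors, then a bootstrap estimate of the extra-sample error) can be outlined with standard tools. The sketch below uses randomly generated stand-ins for the log and core variables; it is only an outline of the approach, not the South Rumaila workflow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# random stand-ins for gamma ray, density, Sw, Vsh, log porosity, core porosity, core k
X = rng.normal(size=(300, 7))
y = rng.integers(0, 3, size=300)                     # three hypothetical facies codes

mlr = LogisticRegression(max_iter=1000).fit(X, y)    # multinomial logit for >2 classes
posterior = mlr.predict_proba(X)                     # P(facies | predictors) per sample

# bootstrap estimate of the misclassification rate
errors = []
for _ in range(200):
    idx = rng.integers(0, len(y), size=len(y))       # resample with replacement
    boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    errors.append(np.mean(boot.predict(X) != y))
print(np.mean(errors))
```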
NASA Astrophysics Data System (ADS)
Kennett, Shane C.
Three low-carbon ASTM A514 microalloyed steels were used to assess the effects of austenite conditioning on the microstructure and mechanical properties of martensite. A range of prior austenite grain sizes with and without thermomechanical processing were produced in a Gleeble 3500 and direct-quenched. Samples in the as-quenched, low temperature tempered, and high temperature tempered conditions were studied. The microstructure was characterized with scanning electron microscopy, electron backscattered diffraction, transmission electron microscopy, and X-ray diffraction. The uniaxial tensile properties and Charpy V-notch properties were measured and compared with the microstructural features (prior austenite grain size, packet size, block size, lath boundaries, and dislocation density). For the equiaxed prior austenite grain conditions, prior austenite grain size refinement decreases the packet size, decreases the block size, and increases the dislocation density of as-quenched martensite. However, after high temperature tempering the dislocation density decreases with prior austenite grain size refinement. Thermomechanical processing increases the low angle substructure, increases the dislocation density, and decreases the block size of as-quenched martensite. The dislocation density increase and block size refinement are sensitive to the austenite grain size before ausforming. The larger prior austenite grain size conditions have a larger increase in dislocation density, but the small prior austenite grain size conditions have the largest refinement in block size. Additionally, for the large prior austenite grain size conditions, the packet size increases with thermomechanical processing. The strength of martensite is often related to an effective grain size or carbon concentration. For the current work, it was concluded that the strength of martensite is primarily controlled by the dislocation density and dislocation substructure, which are in turn related to grain size and carbon concentration. In the microyielding regime, the strength and work hardening are related to the motion of unpinned dislocation segments. However, with tensile strain, a dislocation cell structure is developed and the flow strength (greater than 1% offset) is controlled by the dislocation density following a Taylor hardening model, thereby ruling out any grain size effects on the flow strength. Additionally, it is proposed that lath boundaries contribute to strength. It is shown that the strength differences associated with thermomechanically processed steels can be fully accounted for by dislocation density differences and the effect of lath boundaries. The low temperature ductile to brittle transition of martensite is controlled by the martensite block size, packet size, and prior austenite grain size. However, the effect of block size is likely small in comparison. The ductile to brittle transition temperature is best correlated to the inverse square root of the martensite packet size because large crack deflections are typical at packet boundaries.
A Bayesian approach to the modelling of α Cen A
NASA Astrophysics Data System (ADS)
Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.
2012-12-01
Determining the physical characteristics of a star is an inverse problem consisting of estimating the parameters of models for the stellar structure and evolution, given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated, in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become efficient with respect to more classical grid-based strategies when the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates for the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ~40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.
Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations
ERIC Educational Resources Information Center
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon
2018-01-01
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
[MINERAL BONE DENSITY AND BODY COMPOSITION IN PARTICIPANTS IN EXPERIMENT MARS-500].
Novikov, V E; Oganov, V S; Kabitskaya, O E; Murashko, L M; Naidina, V P; Chernikhova, E A
2016-01-01
Investigations of the bone system and body composition in Mars-500 test-subjects (prior to and on completion of the experiment) involved dual-energy X-ray absorptiometry (DXA) using the HOLOGIC Delphi densitometer and the protocol used to examine cosmonauts. Bone density of the lumbar vertebrae and femoral proximal epiphysis, and body composition, were measured. Reliable changes in vertebral density found in 3 test-subjects displayed different trends, from +2.6 to -2.4%. At the same time, the experiment significantly decreased the mineral density of the femoral proximal epiphysis, including the neck, in all test-subjects. Four test-subjects had cranial mineralization increased by 5-9%, the same as in some cosmonauts after space flight. All test-subjects incurred adipose loss of 2 to 7 kg; one test-subject lost 20 kg, i.e. his adipose mass became three times smaller. Changes in lean mass (1-3 kg) were typically negative; as for changes in lean mass of the extremities, they could be linked with adherence to one or another type of physical activity. Therefore, extended exposure to confinement may affect mineralization of some parts of the skeleton. Unlike real space missions and long-term bedrest studies conducted at the Institute of Biomedical Problems in the past, Mars-500 did not cause clinically significant mineral losses (osteoporosis, osteopenia), probably because of the absence of the effects of microgravity.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
The Chandra Source Catalog: X-ray Aperture Photometry
NASA Astrophysics Data System (ADS)
Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-01-01
The Chandra Source Catalog represents a reanalysis of the entire set of ACIS and HRC imaging observations over the 9-year Chandra mission. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect, and source fluxes are estimated post facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which can be either non-informative or, when previous observations of the same source exist, strongly informative. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it. We also determine limiting sensitivities at arbitrary locations in the field using the same formulation. This work was supported by CXC NASA contracts NAS8-39073 (VK) and NAS8-03060 (CSC).
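Because the gamma prior is conjugate to the Poisson likelihood for counts, a stripped-down version of this flux estimate reduces to a gamma posterior and its quantiles. The sketch below ignores the background, PSF, and contamination corrections that the catalog pipeline handles, and all numbers are hypothetical.

```python
from scipy.stats import gamma

def rate_credible_interval(counts, exposure, alpha0=1.0, beta0=1e-6, level=0.68):
    """Posterior mean and credible range for a Poisson source rate.

    Prior: rate ~ Gamma(alpha0, beta0); beta0 -> 0 makes it nearly non-informative,
    while (alpha0, beta0) from an earlier observation make it strongly informative.
    """
    post = gamma(a=alpha0 + counts, scale=1.0 / (beta0 + exposure))
    lo, hi = post.ppf([(1 - level) / 2.0, (1 + level) / 2.0])
    return post.mean(), lo, hi

print(rate_credible_interval(counts=12, exposure=5000.0))   # counts/s and 68% range
```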
RadVel: The Radial Velocity Modeling Toolkit
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-04-01
RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
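In outline, the patented procedure amounts to fitting a density to residuals from normal operation during training and then running a sequential hypothesis test on incoming residuals. The sketch below substitutes a Gaussian fit and a textbook sequential probability ratio test, with invented thresholds and a synthetic residual stream; it is only a schematic of that idea.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
train = rng.normal(0.0, 0.5, size=5000)      # synthetic residuals from normal operation
mu0, sig = norm.fit(train)                   # fitted probability density (training step)
mu1 = mu0 + 2.0 * sig                        # hypothesized degraded-operation residual mean

def sprt(residuals, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on a stream of residuals (surveillance step)."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += norm.logpdf(r, mu1, sig) - norm.logpdf(r, mu0, sig)
        if llr >= upper:
            return i, "alarm"                # evidence favors degraded operation
        if llr <= lower:
            return i, "normal"               # evidence favors normal operation
    return len(residuals), "undecided"

print(sprt(rng.normal(0.9, 0.5, size=200)))  # drifted residuals should trigger an alarm
```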
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns of the large reflector antennas of the Deep Space Network, as well as their cumulative gain probability and probability density functions, are developed. These are needed to study and evaluate interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network.
Khawaji, M; Astermark, J; Akesson, K; Berntorp, E
2010-05-01
Physical activity has been considered an important factor for bone density and a factor facilitating prevention of osteoporosis. Bone density has been reported to be reduced in haemophilia. To examine the relation between different aspects of physical activity and bone mineral density (BMD) in patients with severe haemophilia on long-term prophylaxis. The study group consisted of 38 patients with severe haemophilia (mean age 30.5 years). All patients received long-term prophylaxis to prevent bleeding. The bone density (BMD, g cm(-2)) of the total body, lumbar spine, total hip, femoral neck and trochanter was measured by dual-energy X-ray absorptiometry. Physical activity was assessed using the self-report Modifiable Activity Questionnaire, an instrument which collects information about leisure and occupational activities for the prior 12 months. A significant correlation was found only between the duration and intensity of vigorous physical activity and bone density at the lumbar spine L1-L4 (duration: r = 0.429, P = 0.020; intensity: r = 0.430, P = 0.019), whereas no significant correlation was found between any aspect of physical activity and bone density at the other measured sites. With adequate long-term prophylaxis, adult patients with haemophilia maintain bone mass, whereas the level of physical activity in terms of intensity and duration plays a minor role. These results may support the proposition that responsiveness to mechanical strain is probably more important for bone mass development in children and during adolescence than in adults, and they underscore the importance of early onset of prophylaxis.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal-mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), the preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
THE BOLOCAM GALACTIC PLANE SURVEY. VIII. A MID-INFRARED KINEMATIC DISTANCE DISCRIMINATION METHOD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellsworth-Bowers, Timothy P.; Glenn, Jason; Battersby, Cara
2013-06-10
We present a new distance estimation method for dust-continuum-identified molecular cloud clumps. Recent (sub-)millimeter Galactic plane surveys have cataloged tens of thousands of these objects, plausible precursors to stellar clusters, but detailed study of their physical properties requires robust distance determinations. We derive Bayesian distance probability density functions (DPDFs) for 770 objects from the Bolocam Galactic Plane Survey in the Galactic longitude range 7.5° ≤ l ≤ 65°. The DPDF formalism is based on kinematic distances, and uses any number of external data sets to place prior distance probabilities to resolve the kinematic distance ambiguity (KDA) for objects in the inner Galaxy. We present here priors related to the mid-infrared absorption of dust in dense molecular regions and the distribution of molecular gas in the Galactic disk. By assuming a numerical model of Galactic mid-infrared emission and simple radiative transfer, we match the morphology of (sub-)millimeter thermal dust emission with mid-infrared absorption to compute a prior DPDF for distance discrimination. Selecting objects first from (sub-)millimeter source catalogs avoids a bias towards the darkest infrared dark clouds (IRDCs), extends the range of heliocentric distance probed by mid-infrared extinction, and includes lower-contrast sources. We derive well-constrained KDA resolutions for 618 molecular cloud clumps, with approximately 15% placed at or beyond the tangent distance. Objects with mid-infrared contrast sufficient to be cataloged as IRDCs are generally placed at the near kinematic distance. Distance comparisons with Galactic Ring Survey KDA resolutions yield a 92% agreement. A face-on view of the Milky Way using resolved distances reveals sections of the Sagittarius and Scutum-Centaurus Arms. This KDA-resolution method for large catalogs of sources through the combination of (sub-)millimeter and mid-infrared observations of molecular cloud clumps is generally applicable to other dust-continuum Galactic plane surveys.
Stauffer, Glenn E.; Rotella, Jay J.; Garrott, Robert A.; Kendall, William L.
2014-01-01
In colonial-breeding species, prebreeders often emigrate temporarily from natal reproductive colonies then subsequently return for one or more years before producing young. Variation in attendance–nonattendance patterns can have implications for subsequent recruitment. We used open robust-design multistate models and 28 years of encounter data for prebreeding female Weddell seals (Leptonychotes weddellii [Lesson]) to evaluate hypotheses about (1) the relationships of temporary emigration (TE) probabilities to environmental and population size covariates and (2) motivations for attendance and consequences of nonattendance for subsequent probability of recruitment to the breeding population. TE probabilities were density dependent (β̂_BPOP = 0.66, SE = 0.17; estimated effect [β] and standard error of population size in the previous year), increased when the fast-ice edge was distant from the breeding colonies (β̂_DIST = 0.75, SE = 0.04; estimated effect and standard error of distance to the sea-ice edge in the current year on TE probability in the current year), and were strongly age and state dependent. These results suggest that trade-offs between potential benefits and costs of colony attendance vary annually and might influence motivation to attend colonies. Recruitment probabilities were greatest for seals that consistently attended colonies in two or more years (e.g., mean = 0.56, SD = 0.17) and lowest for seals that never or inconsistently attended prior to recruitment (e.g., mean = 0.32, SD = 0.15), where the mean denotes the mean recruitment probability (over all years) for 10-year-old seals in the specified prebreeder state. In colonial-breeding seabirds, repeated colony attendance increases subsequent probability of recruitment to the adult breeding population; our results suggest similar implications for a marine mammal and are consistent with the hypothesis that prebreeders were motivated to attend reproductive colonies to gain reproductive skills or perhaps to optimally synchronize estrus through close association with mature breeding females.
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
A Learning-Based Approach for IP Geolocation
NASA Astrophysics Data System (ADS)
Eriksson, Brian; Barford, Paul; Sommers, Joel; Nowak, Robert
The ability to pinpoint the geographic location of IP hosts is compelling for applications such as on-line advertising and network attack diagnosis. While prior methods can accurately identify the location of hosts in some regions of the Internet, they produce erroneous results when the delay or topology measurement on which they are based is limited. The hypothesis of our work is that the accuracy of IP geolocation can be improved through the creation of a flexible analytic framework that accommodates different types of geolocation information. In this paper, we describe a new framework for IP geolocation that reduces to a machine-learning classification problem. Our methodology considers a set of lightweight measurements from a set of known monitors to a target, and then classifies the location of that target based on the most probable geographic region given probability densities learned from a training set. For this study, we employ a Naive Bayes framework that has low computational complexity and enables additional environmental information to be easily added to enhance the classification process. To demonstrate the feasibility and accuracy of our approach, we test IP geolocation on over 16,000 routers given ping measurements from 78 monitors with known geographic placement. Our results show that the simple application of our method improves geolocation accuracy for over 96% of the nodes identified in our data set, with on average accuracy 70 miles closer to the true geographic location versus prior constraint-based geolocation. These results highlight the promise of our method and indicate how future expansion of the classifier can lead to further improvements in geolocation accuracy.
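The classification step described here maps directly onto a Gaussian naive Bayes classifier over per-monitor measurements. The sketch below trains on randomly generated delays and region labels, so it only illustrates the mechanics, not the paper's measurement framework.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# synthetic training set: delays from 5 monitors to hosts whose region is known
delays = rng.gamma(shape=2.0, scale=20.0, size=(1000, 5))
region = rng.integers(0, 4, size=1000)                    # four hypothetical regions

clf = GaussianNB().fit(delays, region)                    # learns p(delay | region) per monitor
target = rng.gamma(shape=2.0, scale=20.0, size=(1, 5))    # measurements to an unlocated host
print(clf.predict(target))                                # most probable region
print(clf.predict_proba(target))                          # posterior over regions
```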
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Simple Form of MMSE Estimator for Super-Gaussian Prior Densities
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-04-01
The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE). For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series. We show that the proposed estimator also leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. Experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.
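For reference, the exact MMSE estimate under an arbitrary prior can always be evaluated by numerical integration of the posterior mean, and closed-form or Taylor-series approximations can be checked against it. The Laplacian prior and the numbers below are illustrative, not the estimator derived in the paper.

```python
import numpy as np

def mmse_denoise(y, sigma_n, prior_pdf, grid=np.linspace(-30.0, 30.0, 6001)):
    """Numerical MMSE estimate E[x | y] for y = x + Gaussian noise and a given prior."""
    likelihood = np.exp(-0.5 * ((y - grid) / sigma_n) ** 2)
    w = likelihood * prior_pdf(grid)                 # unnormalized posterior on the grid
    return np.sum(grid * w) / np.sum(w)

laplace = lambda x, b=1.0: np.exp(-np.abs(x) / b) / (2.0 * b)   # a super-Gaussian prior
print(mmse_denoise(3.0, sigma_n=1.0, prior_pdf=laplace))        # shrinks y toward zero
```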
Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals
Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E
2018-01-01
Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)
Attentional and Contextual Priors in Sound Perception.
Wolmetz, Michael; Elhilali, Mounya
2016-01-01
Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
Tree Biomass Estimation of Chinese fir (Cunninghamia lanceolata) Based on Bayesian Method
Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production with huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In the study, allometric equation was used to analyze tree biomass of Chinese fir. The common methods for estimating allometric model have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability in Chinese fir biomass model, suggesting that parameters of biomass model are better represented by probability distributions rather than fixed values as classical method. To deal with the problem, Bayesian method was used for estimating Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For informative priors, 32 biomass equations of Chinese fir were collected from published literature in the paper. The parameter distributions from published literature were regarded as prior distributions in Bayesian model for estimating Chinese fir biomass. Therefore, the Bayesian method with informative priors was better than non-informative priors and classical method, which provides a reasonable method for estimating Chinese fir biomass. PMID:24278198
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production with huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In the study, the allometric equation W = a(D²H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating allometric model have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability in Chinese fir biomass model, suggesting that parameters of biomass model are better represented by probability distributions rather than fixed values as classical method. To deal with the problem, Bayesian method was used for estimating Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For informative priors, 32 biomass equations of Chinese fir were collected from published literature in the paper. The parameter distributions from published literature were regarded as prior distributions in Bayesian model for estimating Chinese fir biomass. Therefore, the Bayesian method with informative priors was better than non-informative priors and classical method, which provides a reasonable method for estimating Chinese fir biomass.
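A compact way to see how informative priors enter such a model is a random-walk Metropolis sampler for the allometric parameters on the log scale. In the sketch below the priors merely stand in for values pooled from published equations and the data are simulated, so this is an outline of the approach rather than the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated sample: x = D^2 * H and measured biomass W following W = a * x^b with noise
x = rng.uniform(0.5, 30.0, size=60)
W = 0.06 * x**0.92 * np.exp(rng.normal(0.0, 0.1, size=60))

def log_post(a, b, s=0.15):
    if a <= 0.0:
        return -np.inf
    # informative priors, standing in for parameters pooled from published equations
    lp = -0.5 * ((np.log(a) + 2.8) / 0.5) ** 2 - 0.5 * ((b - 0.9) / 0.1) ** 2
    resid = np.log(W) - (np.log(a) + b * np.log(x))  # lognormal errors about the power law
    return lp - 0.5 * np.sum((resid / s) ** 2)

a, b = 0.05, 0.9
lp = log_post(a, b)
chain = []
for _ in range(20000):                               # random-walk Metropolis over (a, b)
    a_p, b_p = a + 0.01 * rng.normal(), b + 0.01 * rng.normal()
    lp_p = log_post(a_p, b_p)
    if np.log(rng.random()) < lp_p - lp:
        a, b, lp = a_p, b_p, lp_p
    chain.append((a, b))
print(np.mean(chain[5000:], axis=0))                 # posterior means after burn-in
```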
Attentional and Contextual Priors in Sound Perception
Wolmetz, Michael; Elhilali, Mounya
2016-01-01
Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis. PMID:26882228
The impossibility of probabilities
NASA Astrophysics Data System (ADS)
Zimmerman, Peter D.
2017-11-01
This paper discusses the problem of assigning probabilities to the likelihood of nuclear terrorism events, in particular examining the limitations of using Bayesian priors for this purpose. It suggests an alternate approach to analyzing the threat of nuclear terrorism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
NASA Astrophysics Data System (ADS)
Xu, Wenbo; Jing, Shaocai; Yu, Wenjuan; Wang, Zhaoxian; Zhang, Guoping; Huang, Jianxi
2013-11-01
In this study, the high-risk debris-flow areas of Sichuan Province, Panzhihua and the Liangshan Yi Autonomous Prefecture, were taken as the study areas. Using rainfall and environmental factors as predictors, and based on different prior probability combinations for debris flows, debris-flow prediction in these areas was compared between two statistical methods: logistic regression (LR) and Bayes discriminant analysis (BDA). The comprehensive analysis shows that (a) with mid-range prior probabilities, the overall predicting accuracy of BDA is higher than that of LR; (b) with equal and extreme prior probabilities, the overall predicting accuracy of LR is higher than that of BDA; and (c) regional debris-flow prediction models using rainfall factors alone perform worse than those that also introduce environmental factors, and the predicting accuracies for occurrence and nonoccurrence of debris flows change in opposite directions as information is supplemented.
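The methodological comparison here, logistic regression versus Bayes discriminant analysis under different prior-probability combinations, can be set up with standard classifiers; linear discriminant analysis with an explicit priors argument plays the role of BDA in the sketch below, and the data are simulated placeholders rather than the Sichuan records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# placeholder predictors: rainfall plus environmental factors; 1 = debris flow occurred
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 1.0).astype(int)

lr = LogisticRegression(max_iter=1000).fit(X, y)
# discriminant analysis with an explicit prior-probability combination (e.g. 70/30)
bda = LinearDiscriminantAnalysis(priors=[0.7, 0.3]).fit(X, y)

print("LR accuracy: ", np.mean(lr.predict(X) == y))
print("BDA accuracy:", np.mean(bda.predict(X) == y))
```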
Smith, Lindsey A; McMahon, Lori L
2018-02-01
Alzheimer's disease (AD) pathology begins decades prior to onset of clinical symptoms, and the entorhinal cortex and hippocampus are among the first and most extensively impacted brain regions. The TgF344-AD rat model, which more fully recapitulates human AD pathology in an age-dependent manner, is a next generation preclinical rodent model for understanding pathophysiological processes underlying the earliest stages of AD (Cohen et al., 2013). Whether synaptic alterations occur in hippocampus prior to reported learning and memory deficit is not known. Furthermore, it is not known if specific hippocampal synapses are differentially affected by progressing AD pathology, or if synaptic deficits begin to appear at the same age in males and females in this preclinical model. Here, we investigated the time-course of synaptic changes in basal transmission, paired-pulse ratio, as an indirect measure of presynaptic release probability, long-term potentiation (LTP), and dendritic spine density at two hippocampal synapses in male and ovariectomized female TgF344-AD rats and wildtype littermates, prior to reported behavioral deficits. Decreased basal synaptic transmission begins at medial perforant path-dentate granule cell (MPP-DGC) synapses prior to Schaffer-collateral-CA1 (CA3-CA1) synapses, in the absence of a change in paired-pulse ratio (PPR) or dendritic spine density. N-methyl-d-aspartate receptor (NMDAR)-dependent LTP magnitude is unaffected at CA3-CA1 synapses at 6, 9, and 12months of age, but is significantly increased at MPP-DGC synapses in TgF344-AD rats at 6months only. Sex differences were only observed at CA3-CA1 synapses where the decrease in basal transmission occurs at a younger age in males versus females. These are the first studies to define presymptomatic alterations in hippocampal synaptic transmission in the TgF344-AD rat model. The time course of altered synaptic transmission mimics the spread of pathology through hippocampus in human AD and provides support for this model as a valuable preclinical tool in elucidating pathological mechanisms of early synapse dysfunction in AD. Copyright © 2017. Published by Elsevier Inc.
Does prediction error drive one-shot declarative learning?
Greve, Andrea; Cooper, Elisa; Kaula, Alexander; Anderson, Michael C; Henson, Richard
2017-06-01
The role of prediction error (PE) in driving learning is well-established in fields such as classical and instrumental conditioning, reward learning and procedural memory; however, its role in human one-shot declarative encoding is less clear. According to one recent hypothesis, PE reflects the divergence between two probability distributions: one reflecting the prior probability (from previous experiences) and the other reflecting the sensory evidence (from the current experience). Assuming unimodal probability distributions, PE can be manipulated in three ways: (1) the distance between the mode of the prior and evidence, (2) the precision of the prior, and (3) the precision of the evidence. We tested these three manipulations across five experiments, in terms of peoples' ability to encode a single presentation of a scene-item pairing as a function of previous exposures to that scene and/or item. Memory was probed by presenting the scene together with three choices for the previously paired item, in which the two foil items were from other pairings within the same condition as the target item. In Experiment 1, we manipulated the evidence to be either consistent or inconsistent with prior expectations, predicting PE to be larger, and hence memory better, when the new pairing was inconsistent. In Experiments 2a-c, we manipulated the precision of the priors, predicting better memory for a new pairing when the (inconsistent) priors were more precise. In Experiment 3, we manipulated both visual noise and prior exposure for unfamiliar faces, before pairing them with scenes, predicting better memory when the sensory evidence was more precise. In all experiments, the PE hypotheses were supported. We discuss alternative explanations of individual experiments, and conclude the Predictive Interactive Multiple Memory Signals (PIMMS) framework provides the most parsimonious account of the full pattern of results.
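One concrete way to read the hypothesis is to measure PE as the divergence between two Gaussian distributions, one for the prior and one for the sensory evidence; the three experimental manipulations then correspond to changing the distance between their modes or the precision of either distribution. The KL divergence below is one possible choice of divergence, not a measure specified by the authors.

```python
import numpy as np

def kl_gauss(mu_prior, sd_prior, mu_evidence, sd_evidence):
    """KL(evidence || prior) for two one-dimensional Gaussians, as a stand-in for PE."""
    return (np.log(sd_prior / sd_evidence)
            + (sd_evidence**2 + (mu_evidence - mu_prior)**2) / (2.0 * sd_prior**2)
            - 0.5)

print(kl_gauss(0.0, 1.0, 2.0, 1.0))   # (1) larger distance between prior and evidence
print(kl_gauss(0.0, 0.5, 2.0, 1.0))   # (2) more precise prior, same distance
print(kl_gauss(0.0, 1.0, 2.0, 0.5))   # (3) more precise evidence
```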
Hossain, Monir; Wright, Steven; Petersen, Laura A
2002-04-01
One way to monitor patient access to emergent health care services is to use patient characteristics to predict arrival time at the hospital after onset of symptoms. This predicted arrival time can then be compared with actual arrival time to allow monitoring of access to services. Predicted arrival time could also be used to estimate potential effects of changes in health care service availability, such as closure of an emergency department or an acute care hospital. Our goal was to determine the best statistical method for prediction of arrival intervals for patients with acute myocardial infarction (AMI) symptoms. We compared the performance of multinomial logistic regression (MLR) and discriminant analysis (DA) models. Models for MLR and DA were developed using a dataset of 3,566 male veterans hospitalized with AMI in 81 VA Medical Centers in 1994-1995 throughout the United States. The dataset was randomly divided into a training set (n = 1,846) and a test set (n = 1,720). Arrival times were grouped into three intervals on the basis of treatment considerations: <6 hours, 6-12 hours, and >12 hours. One model for MLR and two models for DA were developed using the training dataset. One DA model had equal prior probabilities, and one DA model had proportional prior probabilities. Predictive performance of the models was compared using the test (n = 1,720) dataset. Using the test dataset, the proportions of patients in the three arrival time groups were 60.9% for <6 hours, 10.3% for 6-12 hours, and 28.8% for >12 hours after symptom onset. Whereas the overall predictive performance by MLR and DA with proportional priors was higher, the DA models with equal priors performed much better in the smaller groups. Correct classifications were 62.6% by MLR, 62.4% by DA using proportional prior probabilities, and 48.1% using equal prior probabilities of the groups. The misclassifications by MLR for the three groups were 9.5%, 100.0%, 74.2% for each time interval, respectively. Misclassifications by DA models were 9.8%, 100.0%, and 74.4% for the model with proportional priors and 47.6%, 79.5%, and 51.0% for the model with equal priors. The choice of MLR or DA with proportional priors, or DA with equal priors for monitoring time intervals of predicted hospital arrival time for a population should depend on the consequences of misclassification errors.
Suzuki, Teppei; Tani, Yuji; Ogasawara, Katsuhiko
2016-07-25
Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet to select information for their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites. Such research uses Web access logs for visitor search behavior. At this time, research applying the patient searching behavior model to medical institution website visitors is lacking. We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. We used the website data access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 31, 2011. The contents of the 6 website pages included the following: home, news, content introduction for medical examinations, mammography screening, holiday person-on-duty information, and other. The search keywords we identified as best expressing website visitor needs were listed as the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explanatory variable, we built a binomial probit model that allows inspection of the contents of each outcome variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformative prior distribution for this model and determined the visit probability classified by keyword for each category. In the case of the keyword "clinic name," the visit probability to the website, repeated visit to the website, and contents page for medical examination was positive. In the case of the keyword "clinic name and regional name," the probability for a repeated visit to the website and the mammography screening page was negative. In the case of the keyword "clinic name + medical examination," the visit probability to the website was positive, and the visit probability to the information page was negative. When visitors referred to the keyword "mammography screening," the visit probability to the mammography screening page was positive (95% highest posterior density interval = 3.38-26.66). Further analysis for not only the clinic website but also various other medical institution websites is necessary to build a general inspection model for medical institution websites; we want to consider this in future research. Additionally, we hope to use the results obtained in this study as a prior distribution for future work to conduct higher-precision analysis.
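A minimal sketch of a binomial probit model with a noninformative (flat) prior, fitted by a random-walk Metropolis sampler on synthetic keyword indicators; this stands in for the study's MCMC setup rather than reproducing it:

```python
# Minimal sketch (assumptions: synthetic data, flat prior, random-walk Metropolis)
# of a binomial probit model relating search-keyword indicators to the
# probability of visiting a given page, in the spirit of the study's MCMC setup.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
# Hypothetical design matrix: intercept + dummies for three keyword categories
X = np.column_stack([np.ones(n), rng.integers(0, 2, size=(n, 3))])
beta_true = np.array([-0.5, 1.0, -0.8, 0.3])
y = rng.random(n) < norm.cdf(X @ beta_true)        # simulated page-visit outcomes

def log_post(beta):
    p = norm.cdf(X @ beta).clip(1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))   # flat prior adds a constant

beta = np.zeros(X.shape[1])
samples, lp = [], log_post(beta)
for it in range(20000):
    prop = beta + 0.05 * rng.normal(size=beta.size)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if it >= 5000:                                  # discard burn-in
        samples.append(beta.copy())

samples = np.array(samples)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
print("posterior means:", samples.mean(axis=0).round(2))
print("95% intervals:", np.column_stack([lo, hi]).round(2))
```

A coefficient whose interval excludes zero plays the role of the "positive" or "negative" visit probabilities reported in the abstract.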
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
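The two scores named above can be written down compactly. The sketch below (not the DCMDN code; the mixture parameters and redshift grid are assumptions) evaluates the PIT value and a numerical CRPS for a Gaussian-mixture predictive PDF:

```python
# Sketch (not the DCMDN code) of the two performance criteria: the probability
# integral transform (PIT) and the continuous ranked probability score (CRPS),
# evaluated for a Gaussian-mixture predictive redshift PDF.
import numpy as np
from scipy.stats import norm

def gmm_cdf(z, weights, means, sigmas):
    """CDF of a 1-D Gaussian mixture evaluated at z (z may be an array)."""
    z = np.atleast_1d(z)[:, None]
    return np.sum(weights * norm.cdf(z, loc=means, scale=sigmas), axis=1)

def pit(z_true, weights, means, sigmas):
    """PIT value: predictive CDF evaluated at the true redshift."""
    return gmm_cdf(z_true, weights, means, sigmas)[0]

def crps(z_true, weights, means, sigmas, grid=np.linspace(0.0, 3.0, 2000)):
    """CRPS = integral of (F(z) - 1{z >= z_true})^2 over a redshift grid."""
    F = gmm_cdf(grid, weights, means, sigmas)
    H = (grid >= z_true).astype(float)
    return np.sum((F - H) ** 2) * (grid[1] - grid[0])

# Hypothetical predictive PDF for one object and its true redshift
weights = np.array([0.7, 0.3])
means   = np.array([0.55, 0.80])
sigmas  = np.array([0.05, 0.10])
z_true  = 0.60
print("PIT :", round(pit(z_true, weights, means, sigmas), 3))
print("CRPS:", round(crps(z_true, weights, means, sigmas), 4))
```

Averaged over a test set, flat PIT histograms indicate well-calibrated PDFs and lower mean CRPS indicates sharper, better-centred predictions.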
Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images
Srivastava, Anuj
2010-01-01
We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to the past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification. PMID:21076692
The neural basis of belief updating and rational decision making.
Achtziger, Anja; Alós-Ferrer, Carlos; Hügelschäfer, Sabine; Steinhauser, Marco
2014-01-01
Rational decision making under uncertainty requires forming beliefs that integrate prior and new information through Bayes' rule. Human decision makers typically deviate from Bayesian updating by either overweighting the prior (conservatism) or overweighting new information (e.g. the representativeness heuristic). We investigated these deviations through measurements of electrocortical activity in the human brain during incentivized probability-updating tasks and found evidence of extremely early commitment to boundedly rational heuristics. Participants who overweight new information display a lower sensibility to conflict detection, captured by an event-related potential (the N2) observed around 260 ms after the presentation of new information. Conservative decision makers (who overweight prior probabilities) make up their mind before new information is presented, as indicated by the lateralized readiness potential in the brain. That is, they do not inhibit the processing of new information but rather immediately rely on the prior for making a decision.
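A toy numerical illustration of the two deviations, treating conservatism and representativeness as exponents that overweight the prior or the likelihood in Bayes' rule (the exponents and urn probabilities are assumptions for illustration, not quantities estimated in the paper):

```python
# Toy illustration (not from the paper): Bayesian updating for a binary state,
# with deviations modelled as exponents that over- or underweight the prior
# (conservatism: a > 1) or the new evidence (representativeness: b > 1).
prior = 0.7                       # prior probability that urn A was chosen
lik_A, lik_B = 0.4, 0.6           # probability of the observed draw under each urn

def posterior(prior, lik_A, lik_B, a=1.0, b=1.0):
    num = (prior ** a) * (lik_A ** b)
    den = num + ((1 - prior) ** a) * (lik_B ** b)
    return num / den

print("Bayesian       :", round(posterior(prior, lik_A, lik_B), 3))
print("conservative   :", round(posterior(prior, lik_A, lik_B, a=2.0), 3))
print("representative :", round(posterior(prior, lik_A, lik_B, b=2.0), 3))
```

The conservative update stays close to the prior, while the representativeness-style update is pulled toward the new evidence, mirroring the two behavioural profiles described above.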
NASA Astrophysics Data System (ADS)
Skilling, John
2005-11-01
This tutorial gives a basic overview of Bayesian methodology, from its axiomatic foundation through the conventional development of data analysis and model selection to its rôle in quantum mechanics, and ending with some comments on inference in general human affairs. The central theme is that probability calculus is the unique language within which we can develop models of our surroundings that have predictive capability. These models are patterns of belief; there is no need to claim external reality. 1. Logic and probability 2. Probability and inference 3. Probability and model selection 4. Prior probabilities 5. Probability and frequency 6. Probability and quantum mechanics 7. Probability and fundamentalism 8. Probability and deception 9. Prediction and truth
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
NASA Astrophysics Data System (ADS)
Xu, Robert S.; Michailovich, Oleg V.; Solovey, Igor; Salama, Magdy M. A.
2010-03-01
Prostate specific antigen density is an established parameter for indicating the likelihood of prostate cancer. To this end, the size and volume of the gland have become pivotal quantities used by clinicians during the standard cancer screening process. As an alternative to manual palpation, an increasing number of volume estimation methods are based on the imagery data of the prostate. The necessity to process large volumes of such data requires automatic segmentation algorithms, which can accurately and reliably identify the true prostate region. In particular, transrectal ultrasound (TRUS) imaging has become a standard means of assessing the prostate due to its safe nature and high benefit-to-cost ratio. Unfortunately, modern TRUS images are still plagued by many ultrasound imaging artifacts such as speckle noise and shadowing, which result in relatively low contrast and reduced SNR of the acquired images. Consequently, many modern segmentation methods incorporate prior knowledge about the prostate geometry to enhance traditional segmentation techniques. In this paper, a novel approach to the problem of TRUS segmentation, particularly the definition of the prostate shape prior, is presented. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for tracking both photometric and morphological features of the prostate. In particular, the tracking of morphological features defines a novel type of "weak" shape priors. The latter acts as a regularization force, which minimally biases the segmentation procedure, while rendering the final estimate stable and robust. The value of the proposed methodology is demonstrated in a series of experiments.
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by a Markov Chain Monte Carlo (MCMC) method carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, and hence the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using MCMC.
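For orientation, the binormal ROC model under a probit link can be written as TPF = Φ(a + b·Φ⁻¹(FPF)), with AUC = Φ(a/√(1 + b²)). A sketch with hypothetical parameter values (not the paper's fitted ones):

```python
# Sketch (assumed parameter values): the binormal ROC curve expressed through a
# probit link, TPF = Phi(a + b * Phi^{-1}(FPF)), and its closed-form AUC.
import numpy as np
from scipy.stats import norm

a, b = 1.2, 0.8                              # hypothetical binormal parameters
fpf = np.linspace(0.01, 0.99, 9)             # false-positive fractions
tpf = norm.cdf(a + b * norm.ppf(fpf))        # probit-link ROC curve
auc = norm.cdf(a / np.sqrt(1.0 + b ** 2))    # closed-form area under the curve

for f, t in zip(fpf, tpf):
    print(f"FPF {f:.2f} -> TPF {t:.3f}")
print("AUC:", round(auc, 4))
```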
Numerical optimization using flow equations.
Punk, Matthias
2014-12-01
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
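One ingredient of such algorithms is the probability that the distance between two uncertain objects is at most a boundary value eps. Assuming isotropic Gaussian location uncertainty (an illustrative choice; FDBSCAN and PDBSCAN make their own modelling decisions), this probability has a closed form via the noncentral chi-square distribution, which the sketch below checks against the sampling approach it replaces:

```python
# Sketch (Gaussian location uncertainty is an assumption): probability that the
# distance between two uncertain objects is <= eps, computed analytically via
# the noncentral chi-square distribution and checked by Monte Carlo sampling.
import numpy as np
from scipy.stats import ncx2

def prob_dist_le_eps(mu1, s1, mu2, s2, eps):
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    var = s1 ** 2 + s2 ** 2                     # variance of the difference per axis
    nc = np.sum((mu1 - mu2) ** 2) / var         # noncentrality parameter
    return ncx2.cdf(eps ** 2 / var, df=mu1.size, nc=nc)

mu1, s1 = [0.0, 0.0], 0.3
mu2, s2 = [0.8, 0.2], 0.4
eps = 1.0

analytic = prob_dist_le_eps(mu1, s1, mu2, s2, eps)

rng = np.random.default_rng(2)
x = rng.normal(mu1, s1, size=(200000, 2))
y = rng.normal(mu2, s2, size=(200000, 2))
mc = np.mean(np.linalg.norm(x - y, axis=1) <= eps)

print(f"analytic: {analytic:.4f}   Monte Carlo: {mc:.4f}")
```

Replacing sampling with such closed-form probabilities is the kind of accuracy/complexity gain the abstract attributes to PDBSCAN over FDBSCAN.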
ERIC Educational Resources Information Center
Bayen, Ute J.; Kuhlmann, Beatrice G.
2011-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source-guessing probabilities to the perceived contingency…
Continental-scale, seasonal movements of a heterothermic migratory tree bat
Cryan, Paul M.; Stricker, Craig A.; Wunder, Michael B.
2014-01-01
Long-distance migration evolved independently in bats and unique migration behaviors are likely, but because of their cryptic lifestyles, many details remain unknown. North American hoary bats (Lasiurus cinereus cinereus) roost in trees year-round and probably migrate farther than any other bats, yet we still lack basic information about their migration patterns and wintering locations or strategies. This information is needed to better understand unprecedented fatality of hoary bats at wind turbines during autumn migration and to determine whether the species could be susceptible to an emerging disease affecting hibernating bats. Our aim was to infer probable seasonal movements of individual hoary bats to better understand their migration and seasonal distribution in North America. We analyzed the stable isotope values of non-exchangeable hydrogen in the keratin of bat hair and combined isotopic results with prior distributional information to derive relative probability density surfaces for the geographic origins of individuals. We then mapped probable directions and distances of seasonal movement. Results indicate that hoary bats summer across broad areas. In addition to assumed latitudinal migration, we uncovered evidence of longitudinal movement by hoary bats from inland summering grounds to coastal regions during autumn and winter. Coastal regions with nonfreezing temperatures may be important wintering areas for hoary bats. Hoary bats migrating through any particular area, such as a wind turbine facility in autumn, are likely to have originated from a broad expanse of summering grounds from which they have traveled in no recognizable order. Better characterizing migration patterns and wintering behaviors of hoary bats sheds light on the evolution of migration and provides context for conserving these migrants.
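The assignment step can be sketched as a grid Bayes rule: a hydrogen-isotope likelihood multiplied by prior distributional information gives a relative probability surface of origin. Everything below (isoscape, prior, measurement, error) is hypothetical and only illustrates the structure of the calculation:

```python
# Illustrative sketch (hypothetical isoscape, prior, and measurement values):
# combining a hydrogen-isotope likelihood with prior distributional information
# on a geographic grid to obtain a relative probability surface of origin.
import numpy as np

# Hypothetical 50 x 50 grid: predicted hair delta-2H declines with "latitude"
lat = np.linspace(25, 60, 50)[:, None] * np.ones((1, 50))
isoscape = -20.0 - 2.0 * (lat - 25)          # expected delta-2H at each cell (permil)
sigma = 12.0                                 # assumed isoscape + assignment error (permil)

# Prior: relative summering density, here a smooth bump centred at 45 deg "latitude"
prior = np.exp(-0.5 * ((lat - 45) / 8.0) ** 2)
prior /= prior.sum()

d2h_measured = -55.0                         # hypothetical value from one bat's hair keratin

likelihood = np.exp(-0.5 * ((d2h_measured - isoscape) / sigma) ** 2)
posterior = likelihood * prior
posterior /= posterior.sum()                 # relative probability density surface of origin

best = np.unravel_index(np.argmax(posterior), posterior.shape)
print("most probable origin cell:", best, " latitude ~", round(lat[best], 1))
```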
Warren, Tessa; Dickey, Michael Walsh; Liburd, Teljer L
2017-07-01
The rational inference, or noisy channel, account of language comprehension predicts that comprehenders are sensitive to the probabilities of different interpretations for a given sentence and adapt as these probabilities change (Gibson, Bergen & Piantadosi, 2013). This account provides an important new perspective on aphasic sentence comprehension: aphasia may increase the likelihood of sentence distortion, leading people with aphasia (PWA) to rely more on the prior probability of an interpretation and less on the form or structure of the sentence (Gibson, Sandberg, Fedorenko, Bergen & Kiran, 2015). We report the results of a sentence-picture matching experiment that tested the predictions of the rational inference account and other current models of aphasic sentence comprehension across a variety of sentence structures. Consistent with the rational inference account, PWA showed similar sensitivity to the probability of particular kinds of form distortions as age-matched controls, yet overall their interpretations relied more on prior probability and less on sentence form. As predicted by rational inference, but not by other models of sentence comprehension in aphasia, PWA's interpretations were more faithful to the form for active and passive sentences than for direct object and prepositional object sentences. However contra rational inference, there was no evidence that individual PWA's severity of syntactic or semantic impairment predicted their sensitivity to form versus the prior probability of a sentence, as cued by semantics. These findings confirm and extend previous findings that suggest the rational inference account holds promise for explaining aphasic and neurotypical comprehension, but they also raise new challenges for the account. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bhattacharya, Abhishek; Dunson, David B.
2012-01-01
This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
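The two-step recipe can be sketched as follows (the target spectrum and target amplitude PDF below are illustrative choices, not those used in the paper): colour a white Gaussian field in the Fourier domain to impose the desired power spectral density, then map its Gaussian marginal onto the desired amplitude distribution with an inverse-CDF (percentile) transform:

```python
# Sketch of the two-step recipe described above (target spectrum and target
# amplitude PDF are illustrative choices): (1) colour a white Gaussian field in
# the Fourier domain; (2) map its Gaussian marginal onto the desired amplitude
# distribution with an inverse-CDF transform.
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(3)
n = 256
white = rng.normal(size=(n, n))

# Step 1: impose an isotropic power-law power spectral density ~ 1/(k^2 + k0^2)
kx = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
psd = 1.0 / (k ** 2 + 0.01 ** 2)
field = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
field = (field - field.mean()) / field.std()       # coloured, still ~Gaussian marginally

# Step 2: percentile transform of the Gaussian marginal to a gamma amplitude PDF
u = norm.cdf(field)                                # uniform marginal, correlations kept
target = gamma(a=2.0, scale=1.0)                   # desired amplitude distribution
signal = target.ppf(u)

skew = float(((signal - signal.mean()) ** 3).mean() / signal.std() ** 3)
print("skewness of transformed field:", round(skew, 2))
```

As the paper notes, the final marginal transform slightly distorts the imposed spectrum, which is why the approach is best viewed as an engineering approximation.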
NASA Astrophysics Data System (ADS)
Pasyanos, Michael E.; Franz, Gregory A.; Ramirez, Abelardo L.
2006-03-01
In an effort to build seismic models that are the most consistent with multiple data sets we have applied a new probabilistic inverse technique. This method uses a Markov chain Monte Carlo (MCMC) algorithm to sample models from a prior distribution and test them against multiple data types to generate a posterior distribution. While computationally expensive, this approach has several advantages over deterministic models, notably the seamless reconciliation of different data types that constrain the model, the proper handling of both data and model uncertainties, and the ability to easily incorporate a variety of prior information, all in a straightforward, natural fashion. A real advantage of the technique is that it provides a more complete picture of the solution space. By mapping out the posterior probability density function, we can avoid simplistic assumptions about the model space and allow alternative solutions to be identified, compared, and ranked. Here we use this method to determine the crust and upper mantle structure of the Yellow Sea and Korean Peninsula region. The model is parameterized as a series of seven layers in a regular latitude-longitude grid, each of which is characterized by thickness and seismic parameters (Vp, Vs, and density). We use surface wave dispersion and body wave traveltime data to drive the model. We find that when properly tuned (i.e., the Markov chains have had adequate time to fully sample the model space and the inversion has converged), the technique behaves as expected. The posterior model reflects the prior information at the edge of the model where there is little or no data to constrain adjustments, but the range of acceptable models is significantly reduced in data-rich regions, producing values of sediment thickness, crustal thickness, and upper mantle velocities consistent with expectations based on knowledge of the regional tectonic setting.
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
… approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our … The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a …
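A minimal example of the Kernel Density Method mentioned above, applied to synthetic samples standing in for the measured data (the report's data and the other approximation techniques it compares are not reproduced here):

```python
# Minimal sketch of kernel density estimation on synthetic samples standing in
# for measured irradiance-like values (an assumption for illustration only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=2000)   # hypothetical scintillation-like data

kde = gaussian_kde(samples)                # Gaussian kernels, bandwidth by Scott's rule
x = np.linspace(0, samples.max(), 400)
pdf_est = kde(x)

print("estimated pdf integrates to:", round(np.sum(pdf_est) * (x[1] - x[0]), 3))
```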
The Past Sure is Tense: On Interpreting Phylogenetic Divergence Time Estimates.
Brown, Joseph W; Smith, Stephen A
2018-03-01
Divergence time estimation, the calibration of a phylogeny to geological time, is an integral first step in modeling the tempo of biological evolution (traits and lineages). However, despite increasingly sophisticated methods to infer divergence times from molecular genetic sequences, the estimated ages of many nodes across the tree of life contrast significantly and consistently with timeframes conveyed by the fossil record. This is perhaps best exemplified by crown angiosperms, where molecular clock (Triassic) estimates predate the oldest (Early Cretaceous) undisputed angiosperm fossils by tens of millions of years or more. While the incompleteness of the fossil record is a common concern, issues of data limitation and model inadequacy are viable (if underexplored) alternative explanations. In this vein, Beaulieu et al. (2015) convincingly demonstrated how methods of divergence time inference can be misled by both (i) extreme state-dependent molecular substitution rate heterogeneity and (ii) biased sampling of representative major lineages. These results demonstrate the impact of (potentially common) model violations. Here, we suggest another potential challenge: that the configuration of the statistical inference problem (i.e., the parameters, their relationships, and associated priors) alone may preclude the reconstruction of the paleontological timeframe for the crown age of angiosperms. We demonstrate, through sampling from the joint prior (formed by combining the tree (diversification) prior with the calibration densities specified for fossil-calibrated nodes), that with no data present at all, an Early Cretaceous age for crown angiosperms is rejected (i.e., has essentially zero probability). More worrisome, however, is that for the 24 nodes calibrated by fossils, almost all have indistinguishable marginal prior and posterior age distributions when employing routine lognormal fossil calibration priors. These results indicate that there is inadequate information in the data to overrule the joint prior. Given that these calibrated nodes are strategically placed in disparate regions of the tree, they act to anchor the tree scaffold, and so the posterior inference for the tree as a whole is largely determined by the pseudodata present in the (often arbitrary) calibration densities. We recommend, as for any Bayesian analysis, that marginal prior and posterior distributions be carefully compared to determine whether signal is coming from the data or prior belief, especially for parameters of direct interest. This recommendation is not novel. However, given how rarely such checks are carried out in evolutionary biology, it bears repeating. Our results demonstrate the fundamental importance of prior/posterior comparisons in any Bayesian analysis, and we hope that they further encourage both researchers and journals to consistently adopt this crucial step as standard practice. Finally, we note that the results presented here do not refute the biological modeling concerns identified by Beaulieu et al. (2015). Both sets of issues remain apposite to the goals of accurate divergence time estimation, and only by considering them in tandem can we move forward more confidently.
PULSAR SIGNAL DENOISING METHOD BASED ON LAPLACE DISTRIBUTION IN NO-SUBSAMPLING WAVELET PACKET DOMAIN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang
2016-11-01
In order to improve the denoising effect of the pulsar signal, a new denoising method is proposed in the no-subsampling wavelet packet domain based on the local Laplace prior model. First, we characterize the distribution of the true noise-free pulsar signal's wavelet packet coefficients and construct a Laplace probability density function model for them. Then, we estimate the denoised wavelet packet coefficients from the noisy pulsar wavelet coefficients based on the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better when calculating the pulsar time of arrival than the translation-invariant wavelet denoising method.
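Under additive Gaussian noise and a Laplace prior on the noise-free coefficients, the maximum a posteriori estimate reduces to soft thresholding with threshold sigma^2/b. The sketch below applies this rule to a plain array of synthetic coefficients; it does not reproduce the paper's no-subsampling wavelet packet transform:

```python
# Sketch of the MAP shrinkage rule implied by a Laplace prior on noise-free
# coefficients under additive Gaussian noise: soft thresholding at sigma^2 / b.
# Applied here to synthetic coefficients rather than an actual wavelet packet
# decomposition of a pulsar profile.
import numpy as np

def laplace_map_shrink(noisy, sigma, b):
    """MAP estimate of clean coefficients: soft threshold at sigma**2 / b."""
    thr = sigma ** 2 / b
    return np.sign(noisy) * np.maximum(np.abs(noisy) - thr, 0.0)

rng = np.random.default_rng(5)
clean = rng.laplace(loc=0.0, scale=0.5, size=4096)      # coefficients with Laplace statistics
sigma = 0.3
noisy = clean + rng.normal(scale=sigma, size=clean.size)

denoised = laplace_map_shrink(noisy, sigma=sigma, b=0.5)
print("RMSE noisy   :", round(np.sqrt(np.mean((noisy - clean) ** 2)), 4))
print("RMSE denoised:", round(np.sqrt(np.mean((denoised - clean) ** 2)), 4))
```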
MaxEnt alternatives to pearson family distributions
NASA Astrophysics Data System (ADS)
Stokes, Barrie J.
2012-05-01
In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
Fracture of Reduced-Diameter Zirconia Dental Implants Following Repeated Insertion.
Karl, Matthias; Scherg, Stefan; Grobecker-Karl, Tanja
Achievement of high insertion torque values indicating good primary stability is a goal during dental implant placement. The objective of this study was to evaluate whether or not two-piece implants made from zirconia ceramic may be damaged as a result of torque application. A total of 10 two-piece zirconia implants were repeatedly inserted into polyurethane foam material with increasing density and decreasing osteotomy size. The insertion torque applied was measured, and implants were checked for fractures by applying the fluorescent penetrant method. Weibull probability of failure was calculated based on the recorded insertion torque values. Catastrophic failures could be seen in five of the implants from two different batches at insertion torques ranging from 46.0 to 70.5 Ncm, while the remaining implants (all belonging to one batch) survived. Weibull probability of failure seems to be low at the manufacturer-recommended maximum insertion torque of 35 Ncm. Chipping fractures at the thread tips as well as tool marks were the only otherwise observed irregularities. While high insertion torques may be desirable for immediate loading protocols, zirconia implants may fracture when manufacturer-recommended insertion torques are exceeded. Evaluating bone quality prior to implant insertion may be useful.
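A sketch of the Weibull calculation with hypothetical fracture torques (the five surviving implants are ignored, i.e. censoring is not handled, which a full analysis would need to address):

```python
# Sketch (hypothetical torque values; right-censored survivors are ignored as a
# simplification): fit a two-parameter Weibull to fracture torques and evaluate
# the probability of failure at the recommended 35 Ncm insertion torque.
import numpy as np
from scipy.stats import weibull_min

fracture_torques = np.array([46.0, 52.5, 58.0, 63.5, 70.5])   # Ncm, illustrative only

shape, loc, scale = weibull_min.fit(fracture_torques, floc=0.0)
p_fail_35 = weibull_min.cdf(35.0, shape, loc=loc, scale=scale)

print(f"Weibull modulus (shape): {shape:.2f}, characteristic torque: {scale:.1f} Ncm")
print(f"estimated probability of failure at 35 Ncm: {p_fail_35:.3f}")
```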
Estimating Distances from Parallaxes. III. Distances of Two Million Stars in the Gaia DR1 Catalogue
NASA Astrophysics Data System (ADS)
Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.
2016-12-01
We infer distances and their asymmetric uncertainties for two million stars using the parallaxes published in the Gaia DR1 (GDR1) catalogue. We do this with two distance priors: A minimalist, isotropic prior assuming an exponentially decreasing space density with increasing distance, and an anisotropic prior derived from the observability of stars in a Milky Way model. We validate our results by comparing our distance estimates for 105 Cepheids which have more precise, independently estimated distances. For this sample we find that the Milky Way prior performs better (the rms of the scaled residuals is 0.40) than the exponentially decreasing space density prior (rms is 0.57), although for distances beyond 2 kpc the Milky Way prior performs worse, with a bias in the scaled residuals of -0.36 (versus -0.07 for the exponentially decreasing space density prior). We do not attempt to include the photometric data in GDR1 due to the lack of reliable color information. Our distance catalog is available at http://www.mpia.de/homes/calj/tgas_distances/main.html as well as at CDS. This should only be used to give individual distances. Combining data or testing models should be done with the original parallaxes, and attention paid to correlated and systematic uncertainties.
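For a single star, the exponentially decreasing space density prior leads to an unnormalised distance posterior proportional to r² exp(-r/L) times the Gaussian parallax likelihood. A sketch with illustrative values (the parallax, its uncertainty, and the scale length L below are assumptions):

```python
# Sketch of the minimalist prior described above: exponentially decreasing space
# density with scale length L, giving P(r | parallax) ~ r^2 exp(-r/L) * N(parallax; 1/r, sigma).
# Parallax, uncertainty and L are illustrative values, not Gaia measurements.
import numpy as np

parallax, sigma = 2.0, 0.3            # parallax and uncertainty in mas (illustrative)
L = 1.35                              # kpc, an assumed scale length
r = np.linspace(1e-3, 10.0, 20000)    # distance grid in kpc

log_post = 2.0 * np.log(r) - r / L - 0.5 * ((parallax - 1.0 / r) / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum() * (r[1] - r[0])    # normalise on the grid

cdf = np.cumsum(post) * (r[1] - r[0])
mode = r[np.argmax(post)]
lo, hi = r[np.searchsorted(cdf, 0.16)], r[np.searchsorted(cdf, 0.84)]
print(f"mode: {mode:.3f} kpc, 68% interval: [{lo:.3f}, {hi:.3f}] kpc "
      f"(naive 1/parallax = {1/parallax:.3f} kpc)")
```

The asymmetry of the resulting interval around the mode is exactly the kind of asymmetric uncertainty quoted in the catalogue.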
Goldberg, Daphne Wrobel; Leitão, Santiago Alonso Tobar; Godfrey, Matthew H.; Lopez, Gustave Gilles; Santos, Armando José Barsante; Neves, Fabiana Alves; de Souza, Érica Patrícia Garcia; Moura, Anibal Sanchez; Bastos, Jayme da Cunha; Bastos, Vera Lúcia Freire da Cunha
2013-01-01
Female sea turtles have rarely been observed foraging during the nesting season. This suggests that prior to their migration to nesting beaches the females must store sufficient energy and nutrients at their foraging grounds and must be physiologically capable of undergoing months without feeding. Leptin (an appetite-suppressing protein) and ghrelin (a hunger-stimulating peptide) affect body weight by influencing energy intake in all vertebrates. We investigated the levels of these hormones and other physiological and nutritional parameters in nesting hawksbill sea turtles in Rio Grande do Norte State, Brazil, by collecting consecutive blood samples from 41 turtles during the 2010–2011 and 2011–2012 reproductive seasons. We found that levels of serum leptin decreased over the nesting season, which potentially relaxed suppression of food intake and stimulated females to begin foraging either during or after the post-nesting migration. Concurrently, we recorded an increasing trend in ghrelin, which may have stimulated food intake towards the end of the nesting season. Both findings are consistent with the prediction that post-nesting females will begin to forage, either during or immediately after their post-nesting migration. We observed no seasonal trend for other physiological parameters (values of packed cell volume and serum levels of alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, γ-glutamyl transferase, low-density lipoprotein, and high-density lipoprotein). The observed downward trends in general serum biochemistry levels were probably due to the physiological challenge of vitellogenesis and nesting in addition to limited energy resources and probable fasting. PMID:27293600
Predicting coastal cliff erosion using a Bayesian probabilistic model
Hapke, Cheryl J.; Plant, Nathaniel G.
2010-01-01
Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
NASA Astrophysics Data System (ADS)
Colombier, M.; Gurioli, L.; Druitt, T. H.; Shea, T.; Boivin, P.; Miallier, D.; Cluzel, N.
2017-02-01
Textural parameters such as density, porosity, pore connectivity, permeability, and vesicle size distributions of vesiculated and dense pyroclasts from the 9.4-ka eruption of Kilian Volcano, were quantified to constrain conduit and eruptive processes. The eruption generated a sequence of five vertical explosions of decreasing intensity, producing pyroclastic density currents and tephra fallout. The initial and final phases of the eruption correspond to the fragmentation of a degassed plug, as suggested by the increase of dense juvenile clasts (bimodal density distributions) as well as non-juvenile clasts, resulting from the reaming of a crater. In contrast, the intermediate eruptive phases were the results of more open-conduit conditions (unimodal density distributions, decreases in dense juvenile pyroclasts, and non-juvenile clasts). Vesicles within the pyroclasts are almost fully connected; however, there are a wide range of permeabilities, especially for the dense juvenile clasts. Textural analysis of the juvenile clasts reveals two vesiculation events: (1) an early nucleation event at low decompression rates during slow magma ascent producing a population of large bubbles (>1 mm) and (2) a syn-explosive nucleation event, followed by growth and coalescence of small bubbles controlled by high decompression rates immediately prior to or during explosive fragmentation. The similarities in pyroclast textures between the Kilian explosions and those at Soufrière Hills Volcano on Montserrat, in 1997, imply that eruptive processes in the two systems were rather similar and probably common to vulcanian eruptions in general.
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
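A stripped-down illustration of the quasi-extinction calculation for the Ricker limit of the model (parameters are assumed for illustration; the explicit host-pathogen dynamics are not reproduced):

```python
# Sketch (illustrative parameters; the paper's explicit host-pathogen model is
# not reproduced): a stochastic Ricker model of density dependence, used to
# estimate the probability of quasi-extinction (falling below a threshold)
# over a 50-year horizon from replicate simulations.
import numpy as np

rng = np.random.default_rng(6)
r, K = 0.8, 500.0            # intrinsic growth rate and carrying capacity (assumed)
sigma_env = 0.4              # environmental stochasticity (log-scale s.d.)
n0, years, reps = 200, 50, 5000
threshold = 20               # quasi-extinction threshold

n = np.full(reps, float(n0))
hit = np.zeros(reps, dtype=bool)
for _ in range(years):
    eps = rng.normal(0.0, sigma_env, size=reps)
    n = n * np.exp(r * (1.0 - n / K) + eps)       # Ricker map with lognormal noise
    hit |= n < threshold

print("probability of quasi-extinction within 50 yr:", round(hit.mean(), 3))
```

Adding density-dependent epidemics to such a simulation inflates the variability of abundance and hence the estimated quasi-extinction probability, which is the contrast the abstract draws.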
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
Using Transformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions
NASA Astrophysics Data System (ADS)
Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.
2014-12-01
One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.
NASA Astrophysics Data System (ADS)
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would appear quasiperiodic, while at other times, the events can appear more Poissonian. Hence a given paleoseismic or instrumental record may not reflect the long-term seismicity of a fault, which has important implications for hazard assessment.
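The conditional probability quoted above can be sketched for a Gaussian recurrence-time model as P(T ≤ t + 50 | T > t). The recurrence means and standard deviations below are assumptions chosen for illustration, not the values used in the study, though they give a ratio of the same order as the one quoted:

```python
# Sketch (illustrative recurrence parameters, not the study's Cascadia values):
# conditional probability of an earthquake in the next 50 years under a Gaussian
# recurrence-time model, contrasting "still in the cluster" with "cluster ended".
from scipy.stats import norm

def prob_next_50(t_since, mean, sd, window=50.0):
    """P(event within `window` years | no event in the last `t_since` years)."""
    survive = norm.sf(t_since, loc=mean, scale=sd)
    return (norm.cdf(t_since + window, loc=mean, scale=sd)
            - norm.cdf(t_since, loc=mean, scale=sd)) / survive

t_since = 320.0   # years since the last great earthquake (roughly Cascadia's 1700 CE event)
in_cluster   = prob_next_50(t_since, mean=500.0,  sd=200.0)   # recurrence within a cluster (assumed)
post_cluster = prob_next_50(t_since, mean=1000.0, sd=450.0)   # waiting for the next cluster (assumed)

print(f"still in cluster : {in_cluster:.3f}")
print(f"cluster has ended: {post_cluster:.3f}  (ratio {in_cluster / post_cluster:.1f}x)")
```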
ERIC Educational Resources Information Center
Axelrod, Michael I.; Zank, Amber J.
2012-01-01
Noncompliance is one of the most problematic behaviors within the school setting. One strategy to increase compliance of noncompliant students is a high-probability command sequence (HPCS; i.e., a set of simple commands in which an individual is likely to comply immediately prior to the delivery of a command that has a lower probability of…
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
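As a sketch of the formalism, with placeholder densities rather than ones fitted to the dollar-mark data, a continuous-time random walk for a log-price simply alternates draws from a waiting-time density and a jump-size density:

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_log_price(n_jumps=10_000, mean_wait=1.0, jump_std=0.01):
    """Simulate a continuous-time random walk for a log-price.

    Waiting times between trades ~ Exponential(mean_wait); jump sizes
    ~ Normal(0, jump_std).  Both densities are placeholders: the point of
    the formalism is that the price distribution follows from whichever
    pair of densities fits the data.
    """
    waits = rng.exponential(mean_wait, n_jumps)
    jumps = rng.normal(0.0, jump_std, n_jumps)
    return np.cumsum(waits), np.cumsum(jumps)

t, x = ctrw_log_price()
print("total time:", t[-1], " final log-price:", x[-1])
```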
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
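The self-consistent multiplicative-update idea is closely related to Wang-Landau sampling. The sketch below is a simplified Wang-Landau estimate of the density of states for a small 1D Ising ring standing in for the paper's energy landscape; the fixed update schedule and the test system are assumptions, not the authors' scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def wang_landau_1d_ising(N=12, sweeps=20_000):
    """Simplified Wang-Landau estimate of ln g(E) for a 1D periodic Ising ring.

    A multiplicative factor penalizes already-visited energies, flattening the
    energy histogram so that ln g(E) converges; here the factor is reduced on a
    fixed schedule instead of a histogram-flatness test.
    """
    spins = rng.choice([-1, 1], size=N)
    E = -int(np.sum(spins * np.roll(spins, 1)))
    log_g = {}            # running estimate of ln g(E)
    ln_f = 1.0
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
            E_new = E + dE
            # accept with probability min(1, g(E_old) / g(E_new))
            if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
                spins[i] *= -1
                E = E_new
            log_g[E] = log_g.get(E, 0.0) + ln_f
        if sweep and sweep % 2000 == 0:
            ln_f /= 2.0   # tighten the update factor
    return log_g

log_g = wang_landau_1d_ising()
base = min(log_g.values())
for E in sorted(log_g):
    print(f"E={E:+4d}  ln g ~ {log_g[E] - base:7.2f}")
```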
Agapova, Maria; Devine, Emily B; Nguyen, Hiep; Wolf, Fredric M; Inoue, Lurdes Y T
2014-07-01
Assessing relative performance among competing interventions is an important part of comparative effectiveness research. Bayesian indirect comparisons add information to existing Cochrane reviews, such as which intervention is likely to perform best. However, heterogeneity variance priors may influence results and, potentially, clinical guidance. We highlight the features of Bayesian indirect comparisons using a case study of a Cochrane review update in asthma care. The probability that one self-management educational intervention outperforms others is estimated. Simulation studies investigate the effect of heterogeneity variance prior distributions. Results suggest a 55% probability that individual education is best, followed by combination (39%) and group (6%). The intervention with few trials was sensitive to prior distributions. Bayesian indirect comparisons updates of Cochrane reviews are valuable comparative effectiveness research tools.
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that using the protocols in this study eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior
Borrero, Carrie S.W; Borrero, John C
2008-01-01
We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness
NASA Astrophysics Data System (ADS)
Hardy, Tyler J.; Cain, Stephen C.
2016-05-01
The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). A MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the assumption of a priori probability used in a MHT algorithm is investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then implemented into a MHT algorithm, and the algorithm is then tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.
A Bayesian Assessment of Seismic Semi-Periodicity Forecasts
NASA Astrophysics Data System (ADS)
Nava, F.; Quinteros, C.; Glowacka, E.; Frez, J.
2016-01-01
Among the schemes for earthquake forecasting, the search for semi-periodicity during large earthquakes in a given seismogenic region plays an important role. When considering earthquake forecasts based on semi-periodic sequence identification, the Bayesian formalism is a useful tool for: (1) assessing how well a given earthquake satisfies a previously made forecast; (2) re-evaluating the semi-periodic sequence probability; and (3) testing other prior estimations of the sequence probability. A comparison of Bayesian estimates with updated estimates of semi-periodic sequences that incorporate new data not used in the original estimates shows extremely good agreement, indicating that: (1) the probability that a semi-periodic sequence is not due to chance is an appropriate estimate for the prior sequence probability estimate; and (2) the Bayesian formalism does a very good job of estimating corrected semi-periodicity probabilities, using slightly less data than that used for updated estimates. The Bayesian approach is exemplified explicitly by its application to the Parkfield semi-periodic forecast, and results are given for its application to other forecasts in Japan and Venezuela.
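Stripped to its core, the re-evaluation step is Bayes' rule applied to the hypothesis that the semi-periodic sequence is real rather than due to chance; the prior and the two hit probabilities below are made-up numbers chosen only to show the mechanics.

```python
def update_sequence_probability(prior, p_hit_if_real, p_hit_if_chance, hit):
    """Posterior probability that a semi-periodic sequence is real, given
    whether the forecast earthquake occurred (hit) in the stated window."""
    like_real = p_hit_if_real if hit else 1.0 - p_hit_if_real
    like_chance = p_hit_if_chance if hit else 1.0 - p_hit_if_chance
    num = prior * like_real
    return num / (num + (1.0 - prior) * like_chance)

# Hypothetical example: 70% prior that the sequence is real, and the event is
# expected in the forecast window with probability 0.8 if real, 0.2 by chance.
print(update_sequence_probability(0.70, 0.8, 0.2, hit=True))   # ~0.90
print(update_sequence_probability(0.70, 0.8, 0.2, hit=False))  # ~0.37
```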
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular and uniform densities, either separated from zero or not. Particular instability results are obtained for a general class of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to experimental data for the mouse B-cell lymphoma, showing mean square errors at a comparable level. For the estimated sets of parameters we discuss the possibility of stabilising the tumour dormant steady state; instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
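For the Erlang case, the linear chain trick mentioned above replaces the distributed delay by a cascade of ordinary differential equations. The sketch below applies it to a simple delayed logistic equation rather than the tumour-immune model, with arbitrary parameter values, just to show how the distributed delay becomes a set of ODE stages.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Delayed logistic growth x'(t) = r x(t) (1 - y(t)/K), where y(t) is the
# convolution of an Erlang(k, a) kernel with x.  Linear chain trick: y equals
# the last stage of a k-stage low-pass cascade driven by x.
r, K, a, k = 0.8, 1.0, 2.0, 3

def rhs(t, u):
    x, stages = u[0], u[1:]
    dx = r * x * (1.0 - stages[-1] / K)
    dstages = np.empty(k)
    dstages[0] = a * (x - stages[0])
    dstages[1:] = a * (stages[:-1] - stages[1:])
    return np.concatenate(([dx], dstages))

u0 = np.concatenate(([0.1], np.full(k, 0.1)))   # start near x = 0.1
sol = solve_ivp(rhs, (0.0, 40.0), u0, max_step=0.05)
print("final x:", sol.y[0, -1])
```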
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.
Repeat migration and disappointment.
Grant, E K; Vanderkamp, J
1986-01-01
This article investigates the determinants of repeat migration among the 44 regions of Canada, using information from a large micro-database which spans the period 1968 to 1971. The explanation of repeat migration probabilities is a difficult task, and this attempt is only partly successful. Many of the explanatory variables are not significant, and the overall explanatory power of the equations is not high. In the area of personal characteristics, the variables related to age, sex, and marital status are generally significant and have the expected signs. The distance variable has a strongly positive effect on onward move probabilities. Variables related to prior migration experience have an important impact that differs between return and onward probabilities. In particular, the occurrence of prior moves has a striking effect on the probability of onward migration. The variable representing disappointment, or relative success of the initial move, plays a significant role in explaining repeat migration probabilities. The disappointment variable represents the ratio of actual versus expected wage income in the year after the initial move, and its effect on both repeat migration probabilities is always negative and almost always highly significant. The repeat probabilities diminish after a year's stay in the destination region, but disappointment in the most recent year still has a bearing on the delayed repeat probabilities. While the quantitative impact of the disappointment variable is not large, it is difficult to draw comparisons since similar estimates are not available elsewhere.
2014-01-01
Background This protocol concerns the assessment of cost-effectiveness of hospital health information technology (HIT) in four hospitals. Two of these hospitals are acquiring ePrescribing systems incorporating extensive decision support, while the other two will implement systems incorporating more basic clinical algorithms. Implementation of an ePrescribing system will have diffuse effects over myriad clinical processes, so the protocol has to deal with a large amount of information collected at various ‘levels’ across the system. Methods/Design The method we propose is use of Bayesian ideas as a philosophical guide. Assessment of cost-effectiveness requires a number of parameters in order to measure incremental cost utility or benefit – the effectiveness of the intervention in reducing frequency of preventable adverse events; utilities for these adverse events; costs of HIT systems; and cost consequences of adverse events averted. There is no single end-point that adequately and unproblematically captures the effectiveness of the intervention; we therefore plan to observe changes in error rates and adverse events in four error categories (death, permanent disability, moderate disability, minimal effect). For each category we will elicit and pool subjective probability densities from experts for reductions in adverse events, resulting from deployment of the intervention in a hospital with extensive decision support. The experts will have been briefed with quantitative and qualitative data from the study and external data sources prior to elicitation. Following this, there will be a process of deliberative dialogues so that experts can “re-calibrate” their subjective probability estimates. The consolidated densities assembled from the repeat elicitation exercise will then be used to populate a health economic model, along with salient utilities. The credible limits from these densities can define thresholds for sensitivity analyses. Discussion The protocol we present here was designed for evaluation of ePrescribing systems. However, the methodology we propose could be used whenever research cannot provide a direct and unbiased measure of comparative effectiveness. PMID:25038609
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps
NASA Astrophysics Data System (ADS)
Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.
2017-01-01
We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is developed using a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate the basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent with maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from Herschel Multi-Tiered Extragalactic Survey using a 24-μm catalogue as a positional prior, and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP) and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.
Redundancy and reduction: Speakers manage syntactic information density
Florian Jaeger, T.
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
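The ensemble is easy to sample numerically, which gives a quick check on such results: draw two independent Haar-random bipartite pure states, partially trace each, and diagonalize the difference. The dimensions and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_mixed_state(n, m):
    """Reduced density matrix of a Haar-random pure state on C^n (x) C^m."""
    psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T        # partial trace over the m-dimensional factor

n, m, samples = 8, 8, 2000
eigs, trace_dists = [], []
for _ in range(samples):
    delta = random_mixed_state(n, m) - random_mixed_state(n, m)
    w = np.linalg.eigvalsh(delta)
    eigs.extend(w)
    trace_dists.append(0.5 * np.abs(w).sum())

print("mean eigenvalue (should be ~0):", np.mean(eigs))
print("typical trace distance:", np.mean(trace_dists))
```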
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
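A minimal version of this workflow, using synthetic depth data in place of the Klamath River observations and Python instead of the R code supplied with the paper, fits two candidate probability density functions by maximum likelihood and compares them with AIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
depths = rng.gamma(shape=4.0, scale=0.15, size=300)   # synthetic depth data (m)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0.0)               # ML fit, location fixed at 0
    loglik = np.sum(dist.logpdf(depths, *params))
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:8s}  AIC = {aic:7.1f}")
# The fitted density with the lowest AIC serves as the habitat suitability
# curve, typically rescaled so that its maximum equals 1.
```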
NASA Astrophysics Data System (ADS)
Yao, W.; Polewski, P.; Krzystek, P.
2017-09-01
In this paper, a labelling method for the semantic analysis of ultra-high point density MLS data (up to 4000 points/m²) in urban road corridors is developed by combining a conditional random field (CRF) for the context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) to generate the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pair-wise potentials of node edges. The foundations of the classification are various geometric features derived from covariance matrices and a local accumulation map of spatial coordinates based on local neighbourhoods. In order to cope with the ultra-high point density, a plane-based region growing method combined with a rule-based classifier is first applied to fix semantic labels for man-made objects. Once such points, which usually account for the majority of the data, are pre-labeled, the CRF classifier can be solved by optimizing the discriminative probability for the nodes of the subgraph that excludes the pre-labeled nodes. The process can be viewed as an evidence fusion step that infers a degree of belief for point labelling from different sources. The MLS data used for this study were acquired by a vehicle-borne Z+F phase-based laser scanner, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of Munich City and are assumed to consist of seven object classes including impervious surfaces, tree, building roof/facade, low vegetation, vehicle and pole. The competitive classification performance can be explained by several factors: for example, the above-ground height highlights the vertical dimension of houses, trees and even cars, and performance also benefits from the decision-level fusion of the graph-based contextual classification approach with shape priors. The use of context-based classification methods mainly contributed to smoothing the labelling by removing outliers and to improving underrepresented object classes. In addition, the routine operation of context-based classification for such high-density MLS data becomes much more efficient, comparable to non-contextual classification schemes.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrödinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave functions and the energy levels of the modified Pöschl-Teller potential. Graphs of the wave functions and the probability densities are generated using the Delphi 7.0 programming language. Finally, the expectation values of quantum-mechanical operators can be calculated analytically in integral form or from the probability density graphs produced by the program.
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
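Under a conjugate Beta prior, the central quantity, the posterior probability that the true response rate exceeds the target, is just a Beta survival function; the prior, sample size, and target below are illustrative rather than the STD's recommended settings.

```python
from scipy import stats

def prob_rate_exceeds_target(a, b, responses, n, target):
    """P(true response rate > target | data) under a Beta(a, b) prior
    and a binomial likelihood with `responses` successes out of `n`."""
    posterior = stats.beta(a + responses, b + n - responses)
    return posterior.sf(target)

# Hypothetical stage-1 data: Beta(1, 1) prior, 8 responses in 20 patients,
# target response rate 30%.
print(prob_rate_exceeds_target(1, 1, 8, 20, 0.30))   # ~0.85
```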
Bayesian Analysis of a Simple Measurement Model Distinguishing between Types of Information
NASA Astrophysics Data System (ADS)
Lira, Ignacio; Grientschnig, Dieter
2015-12-01
Let a quantity of interest, Y, be modeled in terms of a quantity X and a set of other quantities Z. Suppose that for Z there is type B information, by which we mean that it leads directly to a joint state-of-knowledge probability density function (PDF) for that set, without reference to likelihoods. Suppose also that for X there is type A information, which signifies that a likelihood is available. The posterior for X is then obtained by updating its prior with said likelihood by means of Bayes' rule, where the prior encodes whatever type B information there may be available for X. If there is no such information, an appropriate non-informative prior should be used. Once the PDFs for X and Z have been constructed, they can be propagated through the measurement model to obtain the PDF for Y, either analytically or numerically. But suppose that, at the same time, there is also information of type A, type B or both types together for the quantity Y. By processing such information in the manner described above we obtain another PDF for Y. Which one is right? Should both PDFs be merged somehow? Is there another way of applying Bayes' rule such that a single PDF for Y is obtained that encodes all existing information? In this paper we examine what we believe should be the proper ways of dealing with such a (not uncommon) situation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaidos, Eric, E-mail: gaidos@hawaii.edu
A key goal of the Kepler mission is the discovery of Earth-size transiting planets in "habitable zones" where stellar irradiance maintains a temperate climate on an Earth-like planet. Robust estimates of planet radius and irradiance require accurate stellar parameters, but most Kepler systems are faint, making spectroscopy difficult and prioritization of targets desirable. The parameters of 2035 host stars were estimated by Bayesian analysis and the probabilities p_HZ that 2738 candidate or confirmed planets orbit in the habitable zone were calculated. Dartmouth Stellar Evolution Program models were compared to photometry from the Kepler Input Catalog, priors for stellar mass, age, metallicity and distance, and planet transit duration. The analysis yielded probability density functions for calculating confidence intervals of planet radius and stellar irradiance, as well as p_HZ. Sixty-two planets have p_HZ > 0.5 and a most probable stellar irradiance within habitable zone limits. Fourteen of these have radii less than twice the Earth; the objects most resembling Earth in terms of radius and irradiance are KOIs 2626.01 and 3010.01, which orbit late K/M-type dwarf stars. The fraction of Kepler dwarf stars with Earth-size planets in the habitable zone (η⊕) is 0.46, with a 95% confidence interval of 0.31-0.64. Parallaxes from the Gaia mission will reduce uncertainties by more than a factor of five and permit definitive assignments of transiting planets to the habitable zones of Kepler stars.
Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach
NASA Astrophysics Data System (ADS)
Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.
2015-04-01
Many attempts for deterministic forecasting of eruptions and landslides have been performed using the material Failure Forecast Method (FFM). This method consists in adjusting an empirical power law on precursory patterns of seismicity or deformation. Until now, most of the studies have presented hindsight forecasts based on complete time series of precursors and do not evaluate the ability of the method for carrying out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach of the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
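In its simplest deterministic form, which is the starting point the Bayesian treatment refines, the FFM fits the inverse precursor rate with a straight line and takes its zero crossing as the forecast time; the synthetic accelerating sequence below stands in for a real volcano-seismic record.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic precursory sequence: the rate accelerates as t -> t_failure = 100.
t_failure, t_obs = 100.0, np.linspace(10, 90, 30)
rate = 50.0 / (t_failure - t_obs) * (1 + 0.05 * rng.normal(size=t_obs.size))

# Classic FFM (exponent p = 1): 1/rate decays linearly and hits zero at failure.
slope, intercept = np.polyfit(t_obs, 1.0 / rate, 1)
print("forecast failure time:", -intercept / slope)   # close to 100
```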
VizieR Online Data Catalog: Wide binaries in Tycho-Gaia: search method (Andrews+, 2017)
NASA Astrophysics Data System (ADS)
Andrews, J. J.; Chaname, J.; Agueros, M. A.
2017-11-01
Our catalogue of wide binaries identified in the Tycho-Gaia Astrometric Solution catalogue. The Gaia source IDs, Tycho IDs, astrometry, posterior probabilities for both the log-flat prior and power-law prior models, and angular separation are presented. (1 data file).
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
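A bare-bones version of the method-of-moments step looks as follows: the Lagrange multipliers of the maximum entropy density are found by minimizing the convex dual, here for two moments of a synthetic sample on [0, 1]. The Bayesian extension with posterior uncertainty on the multipliers is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum entropy density on [0, 1] constrained to match the first two moments
# of some data (a synthetic Beta sample stands in for image intensities).
rng = np.random.default_rng(6)
data = rng.beta(2.0, 5.0, size=5000)
target = np.array([data.mean(), np.mean(data**2)])

x = np.linspace(0.0, 1.0, 2001)          # quadrature grid
dx = x[1] - x[0]

def dual(lam):
    """Convex dual: log Z(lam) - lam . target.  Its minimizer gives the
    Lagrange multipliers of p(x) proportional to exp(lam1*x + lam2*x^2)."""
    logZ = np.log(np.sum(np.exp(lam[0] * x + lam[1] * x**2)) * dx)
    return logZ - lam @ target

lam = minimize(dual, x0=np.zeros(2), method="Nelder-Mead").x
p = np.exp(lam[0] * x + lam[1] * x**2)
p /= np.sum(p) * dx
print("fitted moments :", np.sum(x * p) * dx, np.sum(x**2 * p) * dx)
print("target moments :", target)
```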
Car accidents induced by a bottleneck
NASA Astrophysics Data System (ADS)
Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid
2017-12-01
Based on the Nagel-Schreckenberg (NS) model, we study the probability Pac that car accidents occur at the entrance of the merging section of two roads (i.e. the junction). The simulation results show that the existence of non-cooperative drivers plays a major role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, whereas it improves road safety at high densities. The phase diagram of the system is also constructed.
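The underlying Nagel-Schreckenberg update rule is compact enough to sketch. The code below implements only the standard single-lane ring version, without the junction geometry or the accident criterion studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def nasch_step(pos, vel, L, vmax=5, p_slow=0.3):
    """One parallel update of the Nagel-Schreckenberg model on a ring of
    length L: accelerate, brake to the gap ahead, randomize, then move."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L          # empty cells to next car
    vel = np.minimum(vel + 1, vmax)                  # 1. acceleration
    vel = np.minimum(vel, gaps)                      # 2. braking
    vel = np.where(rng.random(len(vel)) < p_slow,    # 3. random slowdown
                   np.maximum(vel - 1, 0), vel)
    pos = (pos + vel) % L                            # 4. movement
    return pos, vel

L, density = 200, 0.25
n = int(L * density)
pos = np.sort(rng.choice(L, size=n, replace=False))
vel = np.zeros(n, dtype=int)
for _ in range(2000):
    pos, vel = nasch_step(pos, vel, L)
print("mean speed:", vel.mean(), " flow:", density * vel.mean())
```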
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination
Sinkkonen, Aki
2006-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
NASA Astrophysics Data System (ADS)
Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.
2016-01-01
Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762
Pruning a minimum spanning tree
NASA Astrophysics Data System (ADS)
Sandoval, Leonidas
2012-04-01
This work employs various techniques in order to filter random noise from the information provided by minimum spanning trees obtained from the correlation matrices of international stock market indices prior to and during times of crisis. The first technique establishes a threshold above which connections are considered affected by noise, based on the study of random networks with the same probability density distribution of the original data. The second technique is to judge the strength of a connection by its survival rate, which is the amount of time a connection between two stock market indices endures. The idea is that true connections will survive for longer periods of time, and that random connections will not. That information is then combined with the information obtained from the first technique in order to create a smaller network, in which most of the connections are either strong or enduring in time.
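The path from a correlation matrix to a minimum spanning tree is short. The sketch below uses synthetic index returns, the usual distance d = sqrt(2(1 - c)), and SciPy's MST routine, with a crude noise threshold from time-shuffled data standing in for the paper's random-network construction.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(8)

# Synthetic daily returns for 10 "indices" sharing a common market factor.
T, N = 500, 10
market = rng.normal(size=T)
returns = 0.6 * market[:, None] + rng.normal(size=(T, N))

corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))                 # metric distance from correlation
mst = minimum_spanning_tree(dist).toarray()

# Crude noise threshold: strongest correlation expected from time-shuffled data.
shuffled = np.apply_along_axis(rng.permutation, 0, returns)
noise_corr = np.corrcoef(shuffled, rowvar=False)
threshold = np.max(np.abs(noise_corr[~np.eye(N, dtype=bool)]))

for i, j in zip(*np.nonzero(mst)):
    c = 1.0 - dist[i, j] ** 2 / 2.0
    flag = "" if c > threshold else "  (within noise level)"
    print(f"edge {i}-{j}: corr={c:+.2f}{flag}")
```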
Performance of correlation receivers in the presence of impulse noise.
NASA Technical Reports Server (NTRS)
Moore, J. D.; Houts, R. C.
1972-01-01
An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A
2013-02-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
Halperin, Daniel M.; Lee, J. Jack; Dagohoy, Cecile Gonzales; Yao, James C.
2015-01-01
Purpose Despite a robust clinical trial enterprise and encouraging phase II results, the vast minority of oncologic drugs in development receive regulatory approval. In addition, clinicians occasionally make therapeutic decisions based on phase II data. Therefore, clinicians, investigators, and regulatory agencies require improved understanding of the implications of positive phase II studies. We hypothesized that prior probability of eventual drug approval was significantly different across GI cancers, with substantial ramifications for the predictive value of phase II studies. Methods We conducted a systematic search of phase II studies conducted between 1999 and 2004 and compared studies against US Food and Drug Administration and National Cancer Institute databases of approved indications for drugs tested in those studies. Results In all, 317 phase II trials were identified and followed for a median of 12.5 years. Following completion of phase III studies, eventual new drug application approval rates varied from 0% (zero of 45) in pancreatic adenocarcinoma to 34.8% (24 of 69) for colon adenocarcinoma. The proportion of drugs eventually approved was correlated with the disease under study (P < .001). The median type I error for all published trials was 0.05, and the median type II error was 0.1, with minimal variation. By using the observed median type I error for each disease, phase II studies have positive predictive values ranging from less than 1% to 90%, depending on primary site of the cancer. Conclusion Phase II trials in different GI malignancies have distinct prior probabilities of drug approval, yielding quantitatively and qualitatively different predictive values with similar statistical designs. Incorporation of prior probability into trial design may allow for more effective design and interpretation of phase II studies. PMID:26261263
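The arithmetic behind those predictive values follows from Bayes' rule, PPV = pi(1 - beta) / (pi(1 - beta) + (1 - pi) alpha), where pi is the prior probability of eventual approval. In the snippet below, the pancreatic prior is set to a nominal 1% because the observed rate was zero; the numbers are illustrative.

```python
def positive_predictive_value(prior, alpha=0.05, beta=0.10):
    """Probability that a drug is eventually approvable given a positive
    phase II trial with type I error alpha and type II error beta."""
    true_pos = prior * (1.0 - beta)
    false_pos = (1.0 - prior) * alpha
    return true_pos / (true_pos + false_pos)

for site, prior in [("colon (34.8%)", 0.348), ("pancreas (~1%)", 0.01)]:
    print(f"{site:15s} PPV = {positive_predictive_value(prior):.2f}")
# colon: ~0.91, pancreas: ~0.15
```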
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low- and high-density tails are given. The conditional (at fixed density) and marginal probability of the slope, i.e. the density difference between adjacent cells, and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
Advanced prior modeling for 3D bright field electron tomography
NASA Astrophysics Data System (ADS)
Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.
2015-03-01
Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
ERIC Educational Resources Information Center
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization seeks an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false call (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures; it uses the binomial distribution as the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is taken as a conservative estimate of the flaw size with at least 90% probability of detection at 95% confidence, denoted α90/95PE. The paper investigates how the range of flaw sizes relates to α90, the 90% probability-of-detection flaw size, in order to achieve a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
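A small worked example of the binomial logic behind the 29-flaw point-estimate demonstration; this is a sketch of the standard 29-of-29 reasoning, not the paper's optimization procedure, and the numbers are illustrative.

```python
from scipy.stats import binom

n = 29             # flaws in the demonstration set
pod_target = 0.90  # required probability of detection

# Probability of passing the demonstration (PPD): all n flaws must be found.
# If the true POD at the demonstrated flaw size is p, then PPD = p**n.
for p in (0.90, 0.95, 0.98):
    ppd = binom.pmf(n, n, p)       # same as p**n
    print(f"true POD={p:.2f}  ->  PPD={ppd:.3f}")

# The 90/95 point-estimate logic: if the true POD were only 0.90, the chance of
# passing 29/29 is below 5%, so a pass supports >=90% POD at 95% confidence.
print("0.90**29 =", 0.90**29)      # ~0.047 < 0.05
```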
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
NASA Astrophysics Data System (ADS)
Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah
2018-01-01
The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) is investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
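For context, a commonly used global-model expression for the wall-loss frequency of a neutral species with surface reaction probability γ is a Chantry-type form combining a diffusion time and a surface-limited time; the geometry and transport numbers below are purely illustrative assumptions, not the COST-μ APPJ values or the paper's model.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def mean_speed(T, m):
    """Mean thermal speed of a neutral species (T in K, m in kg)."""
    return np.sqrt(8.0 * kB * T / (np.pi * m))

def wall_loss_rate(gamma, D, Lambda0, V, A, T, m):
    """Wall-loss frequency (1/s): diffusion time to the wall plus a
    surface-limited time; small gamma makes the surface term dominant."""
    vbar = mean_speed(T, m)
    t_diff = Lambda0**2 / D                                  # diffusion time
    t_surf = 2.0 * V * (2.0 - gamma) / (A * vbar * gamma)    # surface-limited time
    return 1.0 / (t_diff + t_surf)

# Illustrative numbers only (not the COST-jet geometry): H atoms at 345 K
m_H = 1.67e-27  # kg
print(wall_loss_rate(gamma=0.01, D=2e-4, Lambda0=5e-4, V=1e-7, A=1e-3, T=345, m=m_H))
```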
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ has an important effect in heterogeneous traffic, particularly when vehicles have mixed maximum velocities. In the deterministic case, the SLZ leads to car accidents even at low densities; in this region Pac increases with the fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ reduces the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities Pac without an SLZ (n = 0) is lower than Pac with multiple SLZs (n > 0). However, the existence of multiple SLZs on the road decreases the risk of collision in the congested phase.
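A minimal sketch of Nagel-Schreckenberg update rules with a speed limit zone and two vehicle classes, under illustrative parameters; it shows only the parallel update (acceleration, local speed limit, gap constraint, random braking, motion), not the extended model's accident criterion used to compute Pac.

```python
import numpy as np

rng = np.random.default_rng(0)

L, N, steps = 200, 40, 500       # road length (cells), vehicles, time steps
p_brake = 0.1                    # random braking probability
v_fast, v_slow = 5, 3            # maximum speeds of the two vehicle classes
slz = range(80, 120)             # speed limit zone cells
v_lim = 2                        # speed limit inside the zone

pos = np.sort(rng.choice(L, N, replace=False))
vmax = rng.choice([v_fast, v_slow], N, p=[0.5, 0.5])   # heterogeneous traffic
vel = np.zeros(N, dtype=int)

for _ in range(steps):
    order = np.argsort(pos)
    pos, vel, vmax = pos[order], vel[order], vmax[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L            # empty cells ahead
    # NaSch rules: accelerate, respect local speed limit, avoid collision, randomize
    limit = np.where(np.isin(pos, list(slz)), v_lim, vmax)
    vel = np.minimum(vel + 1, limit)
    vel = np.minimum(vel, gaps)
    brake = rng.random(N) < p_brake
    vel = np.where(brake, np.maximum(vel - 1, 0), vel)
    pos = (pos + vel) % L
```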
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density map for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function of the form ln[p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
Inverse modeling of Asian (222)Rn flux using surface air (222)Rn concentration.
Hirao, Shigekazu; Yamazawa, Hiromi; Moriizumi, Jun
2010-11-01
When used with an atmospheric transport model, the (222)Rn flux distribution estimated in our previous study using soil transport theory caused underestimation of atmospheric (222)Rn concentrations as compared with measurements in East Asia. In this study, we applied a Bayesian synthesis inverse method to produce revised estimates of the annual (222)Rn flux density in Asia by using atmospheric (222)Rn concentrations measured at seven sites in East Asia. The Bayesian synthesis inverse method requires a prior estimate of the flux distribution and its uncertainties. The atmospheric transport model MM5/HIRAT and our previous estimate of the (222)Rn flux distribution as the prior value were used to generate new flux estimates for the eastern half of the Eurasian continent, divided into 10 regions. The (222)Rn flux densities estimated using the Bayesian inversion technique were generally higher than the prior flux densities. The area-weighted average (222)Rn flux density for Asia was estimated to be 33.0 mBq m(-2) s(-1), which is substantially higher than the prior value (16.7 mBq m(-2) s(-1)). The estimated (222)Rn flux densities decrease with increasing latitude as follows: Southeast Asia (36.7 mBq m(-2) s(-1)); East Asia (28.6 mBq m(-2) s(-1)) including China, Korean Peninsula and Japan; and Siberia (14.1 mBq m(-2) s(-1)). The increase of the newly estimated fluxes in Southeast Asia, China, Japan, and the southern part of Eastern Siberia relative to the prior ones contributed most significantly to improved agreement of the model-calculated concentrations with the atmospheric measurements. The sensitivity analysis of prior flux errors and effects of locally exhaled (222)Rn showed that the estimated fluxes in Northern and Central China, Korea, Japan, and the southern part of Eastern Siberia were robust, but that in Central Asia had a large uncertainty.
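A compact sketch of the linear Bayesian synthesis inversion step described above (posterior flux equals prior flux plus a gain times the concentration misfit); the transport matrix, uncertainties and flux numbers below are toy values, not the MM5/HIRAT setup.

```python
import numpy as np

def bayesian_synthesis_inversion(H, y, x_prior, B, R):
    """One-shot Bayesian synthesis inversion: posterior flux estimate and
    covariance given a linear transport operator H (concentration response per
    unit flux from each region), observations y, prior fluxes x_prior with
    covariance B, and observation error covariance R."""
    S = H @ B @ H.T + R
    K = np.linalg.solve(S, H @ B).T            # gain matrix B H^T S^-1
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = B - K @ H @ B
    return x_post, P_post

# toy example: 3 source regions, 4 receptor sites (numbers purely illustrative)
H = np.array([[1.0, 0.2, 0.0],
              [0.3, 0.8, 0.1],
              [0.1, 0.5, 0.6],
              [0.0, 0.1, 0.9]])
x_prior = np.array([16.7, 16.7, 16.7])         # prior flux densities, mBq m-2 s-1
B = np.diag((0.5 * x_prior) ** 2)              # 50% prior uncertainty
y = H @ np.array([35.0, 30.0, 14.0])           # synthetic "observations"
R = np.diag([1.0] * 4)
print(bayesian_synthesis_inversion(H, y, x_prior, B, R)[0])
```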
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
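A tiny sketch of the baseline encounter model typically used in spatial capture-recapture analyses like the one above, in which detection probability declines with distance from a latent activity center; the half-normal form, trap coordinates and σ are illustrative assumptions, with the paper's mean baseline detectability (about 0.07) used for p0.

```python
import numpy as np

def scr_detection_prob(activity_center, trap_xy, p0=0.07, sigma=0.8):
    """Half-normal spatial capture-recapture encounter model: per-occasion
    detection probability as a function of trap-to-activity-center distance."""
    d = np.linalg.norm(trap_xy - activity_center, axis=-1)
    return p0 * np.exp(-d**2 / (2.0 * sigma**2))

traps = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # km, illustrative
print(scr_detection_prob(np.array([0.5, 0.5]), traps))
```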
Fracture prediction and calibration of a Canadian FRAX® tool: a population-based report from CaMos
Fraser, L.-A.; Langsetmo, L.; Berger, C.; Ioannidis, G.; Goltzman, D.; Adachi, J. D.; Papaioannou, A.; Josse, R.; Kovacs, C. S.; Olszynski, W. P.; Towheed, T.; Hanley, D. A.; Kaiser, S. M.; Prior, J.; Jamal, S.; Kreiger, N.; Brown, J. P.; Johansson, H.; Oden, A.; McCloskey, E.; Kanis, J. A.
2016-01-01
Summary A new Canadian WHO fracture risk assessment (FRAX®) tool to predict 10-year fracture probability was compared with observed 10-year fracture outcomes in a large Canadian population-based study (CaMos). The Canadian FRAX tool showed good calibration and discrimination for both hip and major osteoporotic fractures. Introduction The purpose of this study was to validate a new Canadian WHO fracture risk assessment (FRAX®) tool in a prospective, population-based cohort, the Canadian Multi-centre Osteoporosis Study (CaMos). Methods A FRAX tool calibrated to the Canadian population was developed by the WHO Collaborating Centre for Metabolic Bone Diseases using national hip fracture and mortality data. Ten-year FRAX probabilities with and without bone mineral density (BMD) were derived for CaMos women (N=4,778) and men (N=1,919) and compared with observed fracture outcomes to 10 years (Kaplan–Meier method). Cox proportional hazard models were used to investigate the contribution of individual FRAX variables. Results Mean overall 10-year FRAX probability with BMD for major osteoporotic fractures was not significantly different from the observed value in men [predicted 5.4% vs. observed 6.4% (95%CI 5.2–7.5%)] and only slightly lower in women [predicted 10.8% vs. observed 12.0% (95%CI 11.0–12.9%)]. FRAX was well calibrated for hip fracture assessment in women [predicted 2.7% vs. observed 2.7% (95%CI 2.2–3.2%)] but underestimated risk in men [predicted 1.3% vs. observed 2.4% (95%CI 1.7–3.1%)]. FRAX with BMD showed better fracture discrimination than FRAX without BMD or BMD alone. Age, body mass index, prior fragility fracture and femoral neck BMD were significant independent predictors of major osteoporotic fractures; sex, age, prior fragility fracture and femoral neck BMD were significant independent predictors of hip fractures. Conclusion The Canadian FRAX tool provides predictions consistent with observed fracture rates in Canadian women and men, thereby providing a valuable tool for Canadian clinicians assessing patients at risk of fracture. PMID:21161508
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
glass. Pgha: probability of a person being in the glass hazard area. Phit: probability of hit. Phit(f): probability of hit for fatality. Phit(maj): probability of hit for major injury. Phit(mini): probability of hit for minor injury. Pi: debris probability densities at the ES. PMaj(pair): individual... combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
Estimating extinction using unsupervised machine learning
NASA Astrophysics Data System (ADS)
Meingast, Stefan; Lombardi, Marco; Alves, João
2017-05-01
Dust extinction is the most robust tracer of the gas distribution in the interstellar medium, but measuring extinction is limited by the systematic uncertainties involved in estimating the intrinsic colors to background stars. In this paper we present a new technique, Pnicer, that estimates intrinsic colors and extinction for individual stars using unsupervised machine learning algorithms. This new method aims to be free from any priors with respect to the column density and intrinsic color distribution. It is applicable to any combination of parameters and works in arbitrary numbers of dimensions. Furthermore, it is not restricted to color space. Extinction toward single sources is determined by fitting Gaussian mixture models along the extinction vector to (extinction-free) control field observations. In this way it becomes possible to describe the extinction for observed sources with probability densities, rather than a single value. Pnicer effectively eliminates known biases found in similar methods and outperforms them in cases of deep observational data where the number of background galaxies is significant, or when a large number of parameters is used to break degeneracies in the intrinsic color distributions. This new method remains computationally competitive, making it possible to correctly de-redden millions of sources within a matter of seconds. With the ever-increasing number of large-scale high-sensitivity imaging surveys, Pnicer offers a fast and reliable way to efficiently calculate extinction for arbitrary parameter combinations without prior information on source characteristics. The Pnicer software package also offers access to the well-established Nicer technique in a simple unified interface and is capable of building extinction maps including the Nicest correction for cloud substructure. Pnicer is offered to the community as an open-source software solution and is entirely written in Python.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result because of convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
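A brief sketch of two of the estimation routes mentioned above (MLE of an Inverse Gamma model and a kernel density estimate) applied to synthetic landslide areas; the Double Pareto model is not available in scipy and is omitted, and the data are purely illustrative, not a real inventory.

```python
import numpy as np
from scipy import stats

# Hypothetical landslide areas in m^2 (lognormal toy data stand in for an inventory)
rng = np.random.default_rng(1)
areas = rng.lognormal(mean=7.0, sigma=1.2, size=500)

# Maximum Likelihood Estimation of an Inverse Gamma model for landslide area
shape, loc, scale = stats.invgamma.fit(areas, floc=0.0)

# Kernel Density Estimation of the probability density of log-area
kde = stats.gaussian_kde(np.log10(areas))

x = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 200)
pdf_invgamma = stats.invgamma.pdf(x, shape, loc=loc, scale=scale)
pdf_kde = kde(np.log10(x)) / (x * np.log(10))   # change of variables back to area
```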
NASA Technical Reports Server (NTRS)
Gault, D. E.; Burns, J. A.; Cassen, P.; Strom, R. G.
1977-01-01
Prior to the flight of the Mariner 10 spacecraft, Mercury was the least investigated and most poorly known terrestrial planet (Kuiper 1970, Devine 1972). Observational difficulties caused by its proximity to the Sun as viewed from Earth caused the planet to remain a small, vague disk exhibiting little surface contrast or details, an object for which only three major facts were known: 1. its bulk density is similar to that of Venus and Earth, much greater than that of Mars and the Moon; 2. its surface reflects electromagnetic radiation at all wavelengths in the same manner as the Moon (taking into account differences in their solar distances); and 3. its rotation period is in 2/3 resonance with its orbital period. Images obtained during the flyby by Mariner 10 on 29 March 1974 (and the two subsequent flybys on 21 September 1974 and 16 March 1975) revealed Mercury's surface in detail equivalent to that available for the Moon during the early 1960's from Earth-based telescopic views. Additionally, however, information was obtained on the planet's mass and size, atmospheric composition and density, charged-particle environment, and infrared thermal radiation from the surface, and most significantly of all, the existence of a planetary magnetic field that is probably intrinsic to Mercury was established. In the following, this new information is summarized together with results from theoretical studies and ground-based observations. In the quantum jumps of knowledge that have been characteristic of "space-age" exploration, the previously obscure body of Mercury has suddenly come into sharp focus. It is very likely a differentiated body, probably contains a large Earth-like iron-rich core, and displays a surface remarkably similar to that of the Moon, which suggests a similar evolutionary history.
Van der Fels-Klerx, Ine H J; Goossens, Louis H J; Saatkamp, Helmut W; Horst, Suzan H S
2002-02-01
This paper presents a protocol for a formal expert judgment process using a heterogeneous expert panel aimed at the quantification of continuous variables. The emphasis is on the process's requirements related to the nature of expertise within the panel, in particular the heterogeneity of both substantive and normative expertise. The process provides the opportunity for interaction among the experts so that they fully understand and agree upon the problem at hand, including qualitative aspects relevant to the variables of interest, prior to the actual quantification task. Individual experts' assessments on the variables of interest, cast in the form of subjective probability density functions, are elicited with a minimal demand for normative expertise. The individual experts' assessments are aggregated into a single probability density function per variable, thereby weighting the experts according to their expertise. Elicitation techniques proposed include the Delphi technique for the qualitative assessment task and the ELI method for the actual quantitative assessment task. Appropriately, the Classical model was used to weight the experts' assessments in order to construct a single distribution per variable. Applying this model, the experts' quality typically was based on their performance on seed variables. An application of the proposed protocol in the broad and multidisciplinary field of animal health is presented. Results of this expert judgment process showed that the proposed protocol in combination with the proposed elicitation and analysis techniques resulted in valid data on the (continuous) variables of interest. In conclusion, the proposed protocol for a formal expert judgment process aimed at the elicitation of quantitative data from a heterogeneous expert panel provided satisfactory results. Hence, this protocol might be useful for expert judgment studies in other broad and/or multidisciplinary fields of interest.
Wolf reintroduction to Scotland: public attitudes and consequences for red deer management.
Nilsen, Erlend B; Milner-Gulland, E J; Schofield, Lee; Mysterud, Atle; Stenseth, Nils Chr; Coulson, Tim
2007-04-07
Reintroductions are important tools for the conservation of individual species, but recently more attention has been paid to the restoration of ecosystem function, and to the importance of carrying out a full risk assessment prior to any reintroduction programme. In much of the Highlands of Scotland, wolves (Canis lupus) were eradicated by 1769, but there are currently proposals for them to be reintroduced. Their main wild prey if reintroduced would be red deer (Cervus elaphus). Red deer are themselves a contentious component of the Scottish landscape. They support a trophy hunting industry but are thought to be close to carrying capacity, and are believed to have a considerable economic and ecological impact. High deer densities hamper attempts to reforest, reduce bird densities and compete with livestock for grazing. Here, we examine the probable consequences for the red deer population of reintroducing wolves into the Scottish Highlands using a structured Markov predator-prey model. Our simulations suggest that reintroducing wolves is likely to generate conservation benefits by lowering deer densities. It would also free deer estates from the financial burden of costly hind culls, which are required in order to achieve the Deer Commission for Scotland's target deer densities. However, a reintroduced wolf population would also carry costs, particularly through increased livestock mortality. We investigated perceptions of the costs and benefits of wolf reintroductions among rural and urban communities in Scotland and found that the public are generally positive to the idea. Farmers hold more negative attitudes, but far less negative than the organizations that represent them.
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
ERIC Educational Resources Information Center
van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.
2016-01-01
The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…
NASA Astrophysics Data System (ADS)
Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong
2016-12-01
We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.
Fast dictionary-based reconstruction for diffusion spectrum imaging.
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2013-11-01
Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
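A minimal numpy sketch of the second approach described above (a regularized pseudoinverse reconstruction with respect to a dictionary of pdfs); the matrix sizes, random dictionary and λ are placeholders, not the trained K-SVD dictionary or the DSI dimensions.

```python
import numpy as np

def tikhonov_dictionary_reconstruction(D, y_under, mask, lam=0.1):
    """Reconstruct a q-space pdf from undersampled data via a Tikhonov-regularized
    pseudoinverse with respect to a dictionary D (columns are training pdfs or
    K-SVD atoms): solve min_a ||M D a - y||^2 + lam ||a||^2, then return D a."""
    A = D[mask, :]                                # rows of D at sampled q-space points
    lhs = A.T @ A + lam * np.eye(D.shape[1])
    a = np.linalg.solve(lhs, A.T @ y_under)
    return D @ a

# toy sizes: 257 q-space samples, 200 dictionary atoms, 64 acquired samples
rng = np.random.default_rng(0)
D = rng.standard_normal((257, 200))
mask = np.sort(rng.choice(257, 64, replace=False))
x_true = D @ rng.standard_normal(200)
print(tikhonov_dictionary_reconstruction(D, x_true[mask], mask).shape)
```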
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
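A short sketch of the Wald SPRT decision rule referenced above, assuming the per-update likelihood ratios are supplied externally (in the paper they reduce to a simple expression in the current and prior collision-probability estimates); the thresholds follow Wald's classical bounds, and the input numbers are illustrative.

```python
def wald_sprt(likelihood_ratios, alpha=0.01, beta=0.01):
    """Wald Sequential Probability Ratio Test: accumulate the likelihood ratio
    over successive conjunction updates and compare against the thresholds
    A = (1 - beta)/alpha and B = beta/(1 - alpha)."""
    A = (1.0 - beta) / alpha      # accept H1 (e.g., collision risk too high)
    B = beta / (1.0 - alpha)      # accept H0
    lr = 1.0
    for k, l in enumerate(likelihood_ratios, start=1):
        lr *= l
        if lr >= A:
            return "accept H1", k
        if lr <= B:
            return "accept H0", k
    return "continue sampling", len(likelihood_ratios)

print(wald_sprt([1.4, 1.6, 2.0, 2.5]))
```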
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ2 distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
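For reference, the standard representations of this pdf are the Poisson mixture of central χ² densities and the closed form with a modified Bessel function; these are background identities, not the finite-sum representation derived in the paper.

```latex
f_{k,\lambda}(x)
\;=\;\sum_{j=0}^{\infty}\frac{e^{-\lambda/2}(\lambda/2)^{j}}{j!}\,f_{\chi^{2}_{k+2j}}(x)
\;=\;\tfrac{1}{2}\,e^{-(x+\lambda)/2}\Bigl(\tfrac{x}{\lambda}\Bigr)^{k/4-1/2}
I_{k/2-1}\!\bigl(\sqrt{\lambda x}\bigr),\qquad x>0 .
```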
Assessing hypotheses about nesting site occupancy dynamics
Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle
2011-01-01
Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful'' sites have a higher persistence probability than "unsuccessful'' ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
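A small numerical sketch of the classical (matrix) side of the result above: a Sinkhorn-type fixed-point iteration that scales a positive matrix to a column-stochastic matrix mapping p to q. The matrix and vectors are toy values, and convergence is simply iterated here rather than proven.

```python
import numpy as np

def scale_to_stochastic(A, p, q, n_iter=500):
    """Sinkhorn-type iteration (classical Schroedinger bridge): find positive
    scalings r, s so that T_ij = r_i * A_ij * s_j is column stochastic and
    maps the probability vector p to q, i.e. T p = q."""
    r = np.ones(A.shape[0])
    for _ in range(n_iter):
        s = 1.0 / (A.T @ r)            # enforce column sums equal to 1
        r = q / (A @ (s * p))          # enforce T p = q
    return (r[:, None] * A) * s[None, :]

A = np.array([[0.6, 0.3], [0.4, 0.7]])
p = np.array([0.5, 0.5]); q = np.array([0.2, 0.8])
T = scale_to_stochastic(A, p, q)
print(T.sum(axis=0), T @ p)            # columns sum to ~1, T p ~ q
```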
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
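A compact sketch of the two-step construction described above: per-data-set Gaussian kernel density maps followed by a weighted linear combination. The data sets, weights and grid are invented placeholders, and the doubly stochastic treatment of epistemic uncertainty is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

def vent_opening_probability_map(datasets, weights, grid_xy):
    """Weighted linear combination of Gaussian kernel density maps, one per
    volcanological data set (e.g., past vents, fissures, faults); the weights
    would come from the structured expert judgment and sum to 1."""
    combined = np.zeros(grid_xy.shape[1])
    for pts, w in zip(datasets, weights):
        kde = gaussian_kde(pts)            # pts has shape (2, n_points)
        combined += w * kde(grid_xy)
    return combined / combined.sum()       # normalize to a probability per cell

rng = np.random.default_rng(0)
vents  = rng.normal(0.0, 1.0, size=(2, 30))    # hypothetical vent locations
faults = rng.normal(0.5, 2.0, size=(2, 50))    # hypothetical structural data
gx, gy = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
grid = np.vstack([gx.ravel(), gy.ravel()])
pmap = vent_opening_probability_map([vents, faults], [0.7, 0.3], grid)
```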
Some Simple Formulas for Posterior Convergence Rates
2014-01-01
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a "penalized divergence" of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model, and a "norm complexity" of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278
Calibrated tree priors for relaxed phylogenetics and divergence time estimation.
Heled, Joseph; Drummond, Alexei J
2012-01-01
The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian Markov chain Monte Carlo has become the dominant method of divergence time estimation, and fossil evidence has been reinterpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used, but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here, we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
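A brief Monte Carlo sketch along the lines described above: exact fractional Gaussian noise generated from a Cholesky factorization of its covariance, integrated to fractional Brownian motion with a simple reflection at the wall; the reflection scheme and all parameters are illustrative assumptions, not the paper's exact simulation setup.

```python
import numpy as np

def reflected_fbm(alpha=1.5, n_steps=2000, x0=10.0, seed=0):
    """Fractional Gaussian noise from its covariance (Cholesky factorization),
    summed into fractional Brownian motion, with a reflecting wall at x = 0
    enforced by taking |x| after each step."""
    H = alpha / 2.0                    # Hurst exponent, <x^2> ~ t^alpha
    k = np.arange(n_steps)
    # autocovariance of fractional Gaussian noise increments
    gamma = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))
    C = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(C + 1e-12 * np.eye(n_steps))
    rng = np.random.default_rng(seed)
    increments = L @ rng.standard_normal(n_steps)
    x = x0
    path = np.empty(n_steps)
    for i, dx in enumerate(increments):
        x = abs(x + dx)                # reflecting wall at the origin
        path[i] = x
    return path

print(reflected_fbm()[:5])
```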
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
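For reference, a small implementation of the modified Rician density mentioned above (deterministic intensity Is plus random speckle intensity Ic), using the exponentially scaled Bessel function for numerical stability; the parameter values are illustrative only.

```python
import numpy as np
from scipy.special import i0e

def modified_rician_pdf(I, Is, Ic):
    """Modified Rician probability density of speckle intensity I, with
    deterministic (diffraction) part Is and random speckle part Ic; i0e is the
    exponentially scaled Bessel function I0, used to avoid overflow."""
    arg = 2.0 * np.sqrt(I * Is) / Ic
    return (1.0 / Ic) * np.exp(-(I + Is) / Ic + arg) * i0e(arg)

I = np.linspace(0.0, 10.0, 5)
print(modified_rician_pdf(I, Is=2.0, Ic=1.0))
```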
CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation
Wilke, Marko; Altaye, Mekibib; Holland, Scott K.
2017-01-01
Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348
ERIC Educational Resources Information Center
Murphy, Amanda; Terrizzi, Marissa; Cormas, Peter
2012-01-01
"Probability is a difficult concept to teach, because children and adults find it counterintuitive." This is the impetus to consider the detailed planning of a set of lessons with a "mixed", in many senses, group of fourth graders. Can the use of prior experience, and the knowledge associated with that experience, make probability a concept that is…
Robust Connectivity in Sensory and Ad Hoc Network
2011-02-01
as the prior probability is π0 = 0.8, the error probability should be capped at 0.2. This seemingly pathological result is due to the fact that the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
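As an illustration of the kind of parametric empirical Bayes calculation described above, the sketch below fits a beta prior to binomial data with varying sample sizes by maximizing the beta-binomial marginal likelihood and then forms a simple Pearson-type goodness-of-fit statistic against the fitted prior. The data values, the optimizer settings, and this particular chi-square diagnostic are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative empirical Bayes fit of a beta prior with varying binomial sample sizes.
import numpy as np
from scipy import stats, optimize

x = np.array([0, 1, 0, 2, 1, 3, 0, 1])        # observed failures (hypothetical)
n = np.array([10, 12, 8, 20, 15, 30, 9, 14])  # varying demands per data source

def neg_marginal_loglik(params):
    a, b = np.exp(params)                     # enforce positive shape parameters
    return -np.sum(stats.betabinom.logpmf(x, n, a, b))

res = optimize.minimize(neg_marginal_loglik, x0=np.log([1.0, 10.0]))
a_hat, b_hat = np.exp(res.x)

# Pearson-type diagnostic against the fitted beta-binomial marginal model
mean = n * a_hat / (a_hat + b_hat)
var = stats.betabinom.var(n, a_hat, b_hat)
chi2 = np.sum((x - mean) ** 2 / var)
dof = len(x) - 2
print(f"alpha={a_hat:.2f}, beta={b_hat:.2f}, chi2={chi2:.2f}, p={stats.chi2.sf(chi2, dof):.2f}")
```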
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2010 CFR
2010-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2012 CFR
2012-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2014 CFR
2014-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2013 CFR
2013-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2011 CFR
2011-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Matranga, Domenica; Firenze, Alberto; Vullo, Angela
2013-10-01
The aim of this study was to show the potential of Bayesian analysis in statistical modelling of dental caries data. Because of the bounded nature of the dmft (DMFT) index, zero-inflated binomial (ZIB) and beta-binomial (ZIBB) models were considered. The effects of incorporating prior information available about the parameters of models were also shown. The data set used in this study was the Belo Horizonte Caries Prevention (BELCAP) study (Böhning et al., 1999), consisting of five variables collected among 797 Brazilian school children designed to evaluate four programmes for reducing caries. Only the eight primary molar teeth were considered in the data set. A data augmentation algorithm was used for estimation. Firstly, noninformative priors were used to express our lack of knowledge about the regression parameters. Secondly, prior information about the probability of being a structural zero dmft and the probability of being caries affected in the subpopulation of susceptible children was incorporated. With noninformative priors, the best fitting model was the ZIBB. Education (OR = 0.76, 95% CrI: 0.59, 0.99), all interventions (OR = 0.46, 95% CrI: 0.35, 0.62), rinsing (OR = 0.61, 95% CrI: 0.47, 0.80) and hygiene (OR = 0.65, 95% CrI: 0.49, 0.86) were demonstrated to be factors protecting children from being caries affected. Being male increased the probability of being caries diseased (OR = 1.19, 95% CrI: 1.01, 1.42). However, after incorporating informative priors, ZIB models' estimates were not influenced, while ZIBB models reduced deviance and confirmed the association with all interventions and rinsing only. In our application, Bayesian estimates showed accuracy and precision similar to those of likelihood-based estimates, although they offered many computational advantages and the possibility of expressing all forms of uncertainty in terms of probability. The overdispersion parameter could explain why the introduction of prior information had significant effects on the parameters of the ZIBB model, while ZIB estimates remained unchanged. Finally, the ZIBB model was shown to outperform the ZIB model in capturing the overdispersion in the data. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
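For concreteness, a minimal sketch of a zero-inflated beta-binomial (ZIBB) likelihood of the kind described above is given below; the dmft scores, mixing probability, and beta-binomial shape parameters are made-up illustrative values, not estimates from the BELCAP data.

```python
# Hedged sketch: ZIBB log-likelihood for dmft counts out of m = 8 primary molars.
import numpy as np
from scipy import stats

def zibb_loglik(y, m, pi0, a, b):
    """pi0: probability of a structural zero; (a, b): beta-binomial shape parameters."""
    y = np.asarray(y)
    bb = stats.betabinom(m, a, b)
    ll_zero = np.log(pi0 + (1 - pi0) * bb.pmf(0))   # structural or sampling zero
    ll_pos = np.log(1 - pi0) + bb.logpmf(y)         # susceptible children
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

y = np.array([0, 0, 2, 5, 0, 1, 8, 3])  # hypothetical dmft scores
print(zibb_loglik(y, m=8, pi0=0.3, a=1.2, b=2.5))
```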
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.
2011-01-01
Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne
2011-01-01
Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.
Randomized path optimization for the mitigated counter detection of UAVs
2017-06-01
using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal...algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We
Korman, Josh; Yard, Mike
2017-01-01
Article for outlet: Fisheries Research. Abstract: Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
Wavefronts, actions and caustics determined by the probability density of an Airy beam
NASA Astrophysics Data System (ADS)
Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón
2018-07-01
The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of: caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maxima of the probability density of an Airy beam determines a Hamiltonian system.
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
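A minimal sketch of an N-mixture likelihood of the kind described above (Poisson mixing over latent abundance, binomial detection across repeated visits) is shown below; the counts, starting values, and the truncation bound for the latent sum are illustrative assumptions.

```python
# Illustrative N-mixture marginal likelihood and fit for repeated-count data.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def nmixture_negloglik(params, counts, n_max=200):
    lam = np.exp(params[0])                    # Poisson mean abundance per site
    p = 1.0 / (1.0 + np.exp(-params[1]))       # per-visit detection probability
    N = np.arange(n_max + 1)
    prior_N = stats.poisson.pmf(N, lam)        # mixing distribution over abundance
    ll = 0.0
    for y in counts:                           # counts: array of shape (sites, visits)
        det = stats.binom.pmf(y[:, None], N[None, :], p).prod(axis=0)
        ll += np.log(np.sum(prior_N * det))
    return -ll

counts = np.array([[2, 3, 1], [0, 1, 0], [5, 4, 6]])   # hypothetical repeated counts
fit = minimize(nmixture_negloglik, x0=[np.log(3.0), 0.0], args=(counts,))
print("lambda ~", np.exp(fit.x[0]), " p ~", 1.0 / (1.0 + np.exp(-fit.x[1])))
```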
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
NASA Astrophysics Data System (ADS)
Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.
2016-12-01
Estimating a distance by inverting a parallax is only valid in the absence of noise. As most stars in the Gaia catalog will have non-negligible fractional parallax errors, we must treat distance estimation as a constrained inference problem. Here we investigate the performance of various priors for estimating distances, using a simulated Gaia catalog of one billion stars. We use three minimalist, isotropic priors, as well as an anisotropic prior derived from the observability of stars in a Milky Way model. The two priors that assume a uniform distribution of stars—either in distance or in space density—give poor results: The root mean square fractional distance error, f_rms, grows far in excess of 100% once the fractional parallax error, f_true, is larger than 0.1. A prior assuming an exponentially decreasing space density with increasing distance performs well once its single parameter—the scale length—has been set to an appropriate value: f_rms is roughly equal to f_true for f_true < 0.4, yet does not increase further as f_true increases up to 1.0. The Milky Way prior performs well except toward the Galactic center, due to a mismatch with the (simulated) data. Such mismatches will be inevitable (and remain unknown) in real applications, and can produce large errors. We therefore suggest adopting the simpler exponentially decreasing space density prior, which is also less time-consuming to compute. Including Gaia photometry improves the distance estimation significantly for both the Milky Way and exponentially decreasing space density prior, yet doing so requires additional assumptions about the physical nature of stars.
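As a concrete illustration of the exponentially decreasing space density prior discussed above, the sketch below evaluates the corresponding unnormalized posterior over distance for a single star with a Gaussian parallax likelihood; the scale length, parallax, and error values are illustrative assumptions, not the paper's settings.

```python
# Posterior over distance r under P(r) ∝ r^2 exp(-r/L) and a Gaussian parallax likelihood.
import numpy as np

def unnormalized_posterior(r, parallax, sigma, L=1.35):
    """r in kpc; parallax and sigma in mas, so the true parallax is 1/r."""
    prior = r**2 * np.exp(-r / L)
    like = np.exp(-0.5 * ((parallax - 1.0 / r) / sigma) ** 2)
    return prior * like

r = np.linspace(1e-3, 10.0, 4000)
post = unnormalized_posterior(r, parallax=0.5, sigma=0.25)   # 50% fractional parallax error
post /= np.trapz(post, r)
print("posterior mode (kpc):", r[np.argmax(post)])
```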
Imprecise Probability Methods for Weapons UQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Vander Wiel, Scott Alan
Building on recent work in uncertainty quanti cation, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certi cation are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
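The sketch below illustrates the general passive-sonar-equation logic described above: Monte Carlo draws of source level, noise level, and whale depth are converted to received SNR through a simple transmission-loss model and passed through an assumed detector characterization to yield an average detection probability at a given range. All distributions, the loss model, and the detector curve are illustrative assumptions, not the study's inputs.

```python
# Hedged Monte Carlo sketch of single-sensor click detection probability vs. range.
import numpy as np

rng = np.random.default_rng(0)

def detect_prob(range_m, n_draws=10_000):
    sl = rng.normal(200, 10, n_draws)                 # source level, dB re 1 uPa (assumed)
    nl = rng.normal(50, 3, n_draws)                   # noise level, dB (assumed)
    depth = rng.uniform(200, 1200, n_draws)           # whale depth, m (assumed)
    slant = np.hypot(range_m, depth)                  # slant range to the hydrophone
    tl = 20 * np.log10(slant) + 0.03 * slant / 1000   # spherical spreading + absorption
    snr = sl - tl - nl
    p_det = 1.0 / (1.0 + np.exp(-(snr - 10) / 2))     # assumed detector characterization
    return p_det.mean()

for r in (500, 2000, 5000):
    print(r, "m ->", round(detect_prob(r), 3))
```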
Bayesian analysis of the astrobiological implications of life’s early emergence on Earth
Spiegel, David S.; Turner, Edwin L.
2012-01-01
Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a Bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a Bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth’s history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of Bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe. PMID:22198766
Bayesian analysis of the astrobiological implications of life's early emergence on Earth.
Spiegel, David S; Turner, Edwin L
2012-01-10
Life arose on Earth sometime in the first few hundred million years after the young planet had cooled to the point that it could support water-based organisms on its surface. The early emergence of life on Earth has been taken as evidence that the probability of abiogenesis is high, if starting from young Earth-like conditions. We revisit this argument quantitatively in a bayesian statistical framework. By constructing a simple model of the probability of abiogenesis, we calculate a bayesian estimate of its posterior probability, given the data that life emerged fairly early in Earth's history and that, billions of years later, curious creatures noted this fact and considered its implications. We find that, given only this very limited empirical information, the choice of bayesian prior for the abiogenesis probability parameter has a dominant influence on the computed posterior probability. Although terrestrial life's early emergence provides evidence that life might be abundant in the universe if early-Earth-like conditions are common, the evidence is inconclusive and indeed is consistent with an arbitrarily low intrinsic probability of abiogenesis for plausible uninformative priors. Finding a single case of life arising independently of our lineage (on Earth, elsewhere in the solar system, or on an extrasolar planet) would provide much stronger evidence that abiogenesis is not extremely rare in the universe.
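A toy sketch of the prior-sensitivity point made above is given below: for a deliberately simple model in which life arises as a Poisson process with rate lambda and is observed to have emerged within a window t_e, the posterior over lambda is computed under two different uninformative priors. The model, grid, and numbers are illustrative only and are far simpler than the paper's treatment.

```python
# Toy demonstration that the choice of uninformative prior dominates the posterior.
import numpy as np

lam = np.logspace(-3, 2, 2000)          # candidate abiogenesis rates, 1/Gyr
t_e = 0.4                               # assumed early-emergence window, Gyr
likelihood = 1 - np.exp(-lam * t_e)     # P(at least one origin by t_e | lambda)

for name, prior in [("uniform in lambda", np.ones_like(lam)),
                    ("uniform in log(lambda)", 1.0 / lam)]:
    post = likelihood * prior
    post /= np.trapz(post, lam)
    cdf = np.cumsum(post * np.gradient(lam))
    median = lam[np.searchsorted(cdf, 0.5)]
    print(name, "-> posterior median ~", round(float(median), 3))
```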
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
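The result that pre-synaptic competition drives weights toward forward transition probabilities can be illustrated with a very small simulation, sketched below under assumed learning-rate and Markov-chain parameters; this is only the counting intuition behind the result, not the authors' plasticity model.

```python
# Hebbian potentiation with pre-synaptic competition recovers forward transition probabilities.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.1, 0.6, 0.3],      # true forward transition matrix (assumed)
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])
states = [0]
for _ in range(20_000):
    states.append(rng.choice(3, p=P[states[-1]]))

W = np.zeros((3, 3))
eta = 0.01
for pre, post in zip(states[:-1], states[1:]):
    W[pre, post] += eta * (1 - W[pre, post])        # potentiate the used synapse
    others = [j for j in range(3) if j != post]
    W[pre, others] -= eta * W[pre, others]          # pre-synaptic competition depresses the rest

print(np.round(W, 2))   # each row approaches the corresponding row of P
```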
Oak regeneration and overstory density in the Missouri Ozarks
David R. Larsen; Monte A. Metzger
1997-01-01
Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when t_a ≪ t and t_a ≫ t.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting treatment with large effects is 10% (5-25%), and that the probability of detecting treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
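The sketch below shows the general shape of the calculation described above, using an ordinary fixed-bandwidth kernel density estimate of made-up log hazard ratios in place of the weighted adaptive KDE used in the study, and integrating the tail to get the probability of a "large" effect; all numbers and the HR < 0.7 threshold are illustrative assumptions.

```python
# Illustrative tail-probability calculation from a kernel density estimate of treatment effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
log_hr = rng.normal(-0.05, 0.2, 500)          # made-up observed log hazard ratios
kde = stats.gaussian_kde(log_hr)              # stand-in for the weighted adaptive KDE
grid = np.linspace(-1.5, 1.5, 2000)
dens = kde(grid)
mask = grid < np.log(0.7)                     # "large" benefit defined here as HR < 0.7
p_large = np.trapz(dens[mask], grid[mask])
print("P(HR < 0.7) ~", round(float(p_large), 3))
```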
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Kurt Vedros; Robert Youngblood
This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be a false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and "false" would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = √10 × 10^-6/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10^-6/yr. If the parameters were at their baseline values, and ΔCDF > 10^-6/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10^-6/yr but that time history's ΔCDF < 10^-6/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values, and three sensitivity cases in which the number of FOTP demands was reduced, along with the Birnbaum importance of the FOTP.
Hu, Zhiyong; Liebens, Johan; Rao, K Ranga
2008-01-01
Background Relatively few studies have examined the association between air pollution and stroke mortality. Inconsistent and inconclusive results from existing studies on air pollution and stroke justify the need to continue to investigate the linkage between stroke and air pollution. No studies have been done to investigate the association between stroke and greenness. The objective of this study was to examine whether there is an association of stroke with air pollution, income, and greenness in northwest Florida. Results Our study used an ecological geographical approach and dasymetric mapping technique. We adopted a Bayesian hierarchical model with a convolution prior considering five census tract specific covariates. A 95% credible set which defines an interval having a 0.95 posterior probability of containing the parameter for each covariate was calculated from Markov Chain Monte Carlo simulations. The 95% credible sets are (-0.286, -0.097) for household income, (0.034, 0.144) for traffic air pollution effect, (0.419, 1.495) for emission density of monitored point source polluters, (0.413, 1.522) for simple point density of point source polluters without emission data, and (-0.289, -0.031) for greenness. Household income and greenness show negative effects (the posterior densities primarily cover negative values). Air pollution covariates have positive effects (the 95% credible sets cover positive values). Conclusion High risk of stroke mortality was found in areas with low income level, high air pollution level, and low level of exposure to green space. PMID:18452609
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Astrophysics Data System (ADS)
Kastner, S. O.; Bhatia, A. K.
1980-08-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
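For reference, a multi-Rayleigh amplitude model of the general kind described above is simply a weighted mixture of Rayleigh densities; the sketch below uses illustrative weights and scale parameters, not values fitted to liver echo data.

```python
# Sketch of a multi-Rayleigh (mixture of Rayleigh) probability density of echo amplitudes.
import numpy as np
from scipy import stats

def multi_rayleigh_pdf(x, weights, scales):
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * stats.rayleigh.pdf(x, scale=s) for w, s in zip(weights, scales))

x = np.linspace(0, 5, 500)
pdf = multi_rayleigh_pdf(x, weights=[0.6, 0.3, 0.1], scales=[0.5, 1.0, 2.0])
print("normalization check:", round(float(np.trapz(pdf, x)), 3))   # ~1
```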
XID+: Next generation XID development
NASA Astrophysics Data System (ADS)
Hurley, Peter
2017-04-01
XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural framework in which to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
Laboratory-Tutorial Activities for Teaching Probability
ERIC Educational Resources Information Center
Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.
2006-01-01
We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…
Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA
Yarra, Allyson N.; Magoulick, Daniel D.
2018-01-01
Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteushad higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance is strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n ≥ N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A^(-1)B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
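The setting above is easy to probe numerically: sample Haar-distributed unitaries from U(N + n) via the QR construction, take the top N × N sub-block, and inspect its eigenvalues, which fall inside the unit disk. The code below is such a sanity-check sketch under assumed values of N and n, not a computation of the density formula itself.

```python
# Numerical illustration: eigenvalues of the top N x N sub-block of a Haar unitary from U(N + n).
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(d):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))   # fix column phases to get Haar measure
    return q * phases

N, n = 4, 6
eigs = np.concatenate([np.linalg.eigvals(haar_unitary(N + n)[:N, :N])
                       for _ in range(2000)])
print("max |eigenvalue| =", np.abs(eigs).max())   # strictly less than 1
```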
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
Committor of elementary reactions on multistate systems
NASA Astrophysics Data System (ADS)
Király, Péter; Kiss, Dóra Judit; Tóth, Gergely
2018-04-01
In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto the partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal-energy-path and the average-energy concepts. The methods also performed well on the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
Murn, Campbell; Holloway, Graham J
2016-10-01
Species occurring at low density can be difficult to detect and, if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (the white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033), and based on this the mean number of visits required to determine with 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that the low detection probability of some species means that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
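The visits-to-confirm-absence figure quoted above follows from a one-line calculation: with per-visit detection probability p, the chance of missing a present bird on every one of k visits is (1 - p)^k, and the smallest k with (1 - p)^k ≤ 0.05 gives the required number of visits. A minimal check using the central estimate p = 0.207:

```python
# Minimum number of visits k such that (1 - p)^k <= 0.05.
import math

p = 0.207
k = math.ceil(math.log(0.05) / math.log(1 - p))
print(k)   # 13, matching the abstract's central estimate
```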
Complexity for Survival of Living Systems
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A logical connection between the survivability of living systems and the complexity of their behavior (equivalently, mental complexity) has been established. This connection is an important intermediate result of continuing research on mathematical models that could constitute a unified representation of the evolution of both living and non-living systems. Earlier results of this research were reported in several prior NASA Tech Briefs articles, the two most relevant being Characteristics of Dynamics of Intelligent Systems (NPO-21037), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48; and Self-Supervised Dynamical Systems (NPO-30634), NASA Tech Briefs, Vol. 27, No. 3 (March 2003), page 72. As used here, "living systems" is synonymous with "active systems" and "intelligent systems." The quoted terms can signify artificial agents (e.g., suitably programmed computers) or natural biological systems ranging from single-cell organisms at one extreme to the whole of human society at the other extreme. One of the requirements that must be satisfied in mathematical modeling of living systems is reconciliation of the evolution of life with the second law of thermodynamics. In the approach followed in this research, this reconciliation is effected by means of a model, inspired partly by quantum mechanics, in which the quantum potential is replaced with an information potential. The model captures the most fundamental property of life: the ability to evolve from disorder to order without any external interference. The model incorporates the equations of classical dynamics, including Newton's equations of motion and equations for random components caused by uncertainties in initial conditions and by Langevin forces. The equations of classical dynamics are coupled with corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces that are gradients of the information potential, which, in turn, is a function of the probability densities. The probability densities are associated with mental images, both the self-image and nonself images (images of external objects that can include other agents). The evolution of the probability densities represents mental dynamics. Then the interaction between the physical and mental aspects of behavior is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. The interaction of a system with its self and nonself images affords unlimited capacity for increase of complexity. There is a biological basis for this model of mental dynamics in the discovery of mirror neurons that learn by imitation. The levels of complexity attained by use of this model match those observed in living systems. To establish a mechanism for increasing the complexity of dynamics of an active system, the model enables exploitation of a chain of reflections exemplified by questions of the form, "What do you think that I think that you think...?" Mathematically, each level of reflection is represented in the form of an attractor performing the corresponding level of abstraction, with more details removed from higher levels. The model can be used to describe the behaviors not only of biological systems, but also of ecological, social, and economic ones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU > 200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
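A minimal sketch of the fusion step described above is given below: two conditional densities over electron density (one intensity-based, one atlas/location-based), here both crudely modeled as Gaussians over HU, are multiplied into a posterior whose mean is taken as the estimate. The Gaussian forms and all numbers are illustrative assumptions, not the method's actual conditionals.

```python
# Toy Bayesian fusion of an intensity-based and a location-based density over HU.
import numpy as np

hu = np.linspace(-1000, 2000, 3001)   # HU grid

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p_intensity = gauss(hu, mu=300, sigma=250)   # assumed p(density | T1/T2 intensities)
p_location = gauss(hu, mu=700, sigma=400)    # assumed p(density | atlas location)
posterior = p_intensity * p_location
posterior /= np.trapz(posterior, hu)
print("posterior-mean HU estimate:", round(float(np.trapz(hu * posterior, hu)), 1))
```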
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
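The Markovian-arrival material lends itself to a short numerical illustration. The following sketch (Python/NumPy, with illustrative rate and window values) checks that exponential waiting times and Poisson counts behave as described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Memoryless (Markovian) arrivals at rate lam: waiting times are exponential,
# and counts in a window of length T are Poisson with mean lam*T.
lam, T, n_trials = 2.0, 5.0, 100_000

waits = rng.exponential(scale=1.0 / lam, size=n_trials)
print("mean waiting time:", waits.mean(), "(theory:", 1.0 / lam, ")")

counts = rng.poisson(lam=lam * T, size=n_trials)
print("mean count in window:", counts.mean(), "(theory:", lam * T, ")")

# Equivalent check: simulate arrivals by summing exponential gaps until T.
def count_by_simulation():
    t, k = 0.0, 0
    while True:
        t += rng.exponential(1.0 / lam)
        if t > T:
            return k
        k += 1

sim_counts = [count_by_simulation() for _ in range(5000)]
print("simulated mean count:", np.mean(sim_counts))
```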
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2) we show that for 0 < α < 1 the response functions f_{α,β}(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
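A small numerical sketch of the frequency-domain pattern and the reparameterization quoted above follows; the α and q values are illustrative, and the time-domain response and relaxation functions would require the hypergeometric expressions derived in the paper.

```python
import numpy as np

# Sketch: evaluate the Havriliak-Negami spectral shape [1 + (i*omega*tau0)**alpha]**(-beta)
# with the reparameterization beta = (2-q)/(q-1), tau0 = (q-1)**(1/alpha).
alpha, q = 0.5, 1.5                      # illustrative values
beta = (2.0 - q) / (q - 1.0)
tau0 = (q - 1.0) ** (1.0 / alpha)

omega = np.logspace(-3, 3, 7)
hn = (1.0 + (1j * omega * tau0) ** alpha) ** (-beta)

for w, val in zip(omega, hn):
    print(f"omega={w:9.3f}  Re={val.real:+.4f}  Im={val.imag:+.4f}")
```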
High-Speed Photography of Detonation Propagation in Dynamically Precompressed Liquid Explosives
NASA Astrophysics Data System (ADS)
Petel, O. E.; Higgins, A. J.; Yoshinaka, A. C.; Zhang, F.
2007-12-01
The propagation of detonation in shock-compressed nitromethane was observed with a high-speed framing camera. The test explosive, nitromethane, was compressed by a reverberating shock wave to pressures as high as 10 GPa prior to being detonated by a secondary detonation event. The pressure and density in the test explosive prior to detonation were determined using two methods: manganin stress gauge measurements and LS-DYNA simulations. The velocity of the detonation front was determined from consecutive frames and correlated to the density of the reverberating shock-compressed explosive prior to detonation. Observing detonation propagation under these non-ambient conditions provides data which can be useful in the validation of equation of state models.
Looping probabilities of elastic chains: a path integral approach.
Cotta-Ramusino, Ludovica; Maddocks, John H
2010-11-01
We consider an elastic chain at thermodynamic equilibrium with a heat bath, and derive an approximation to the probability density function, or pdf, governing the relative location and orientation of the two ends of the chain. Our motivation is to exploit continuum mechanics models for the computation of DNA looping probabilities, but here we focus on explaining the novel analytical aspects in the derivation of our approximation formula. Accordingly, and for simplicity, the current presentation is limited to the illustrative case of planar configurations. A path integral formalism is adopted, and, in the standard way, the first approximation to the looping pdf is obtained from a minimal energy configuration satisfying prescribed end conditions. Then we compute an additional factor in the pdf which encompasses the contributions of quadratic fluctuations about the minimum energy configuration along with a simultaneous evaluation of the partition function. The original aspects of our analysis are twofold. First, the quadratic Lagrangian describing the fluctuations has cross-terms that are linear in first derivatives. This, seemingly small, deviation from the structure of standard path integral examples complicates the necessary analysis significantly. Nevertheless, after a nonlinear change of variable of Riccati type, we show that the correction factor to the pdf can still be evaluated in terms of the solution to an initial value problem for the linear system of Jacobi ordinary differential equations associated with the second variation. The second novel aspect of our analysis is that we show that the Hamiltonian form of these linear Jacobi equations still provides the appropriate correction term in the inextensible, unshearable limit that is commonly adopted in polymer physics models of, e.g. DNA. Prior analyses of the inextensible case have had to introduce nonlinear and nonlocal integral constraints to express conditions on the relative displacement of the end points. Our approximation formula for the looping pdf is of quite general applicability as, in contrast to most prior approaches, no assumption is made of either uniformity of the elastic chain, nor of a straight intrinsic shape. If the chain is uniform the Jacobi system evaluated at certain minimum energy configurations has constant coefficients. In such cases our approximate pdf can be evaluated in an entirely explicit, closed form. We illustrate our analysis with a planar example of this type and compute an approximate probability of cyclization, i.e., of forming a closed loop, from a uniform elastic chain whose intrinsic shape is an open circular arc.
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
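The weighting idea can be sketched as follows; the mock catalogue, the Gaussian form of the number-density PDF, and all parameter values are assumptions for illustration, not the calibrations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: descendants/progenitors are assigned probabilities from a PDF in
# cumulative comoving number density space; those probabilities weight the
# distribution of a physical property (here, stellar mass).

# Hypothetical z_f catalogue: log10 cumulative number density and stellar mass.
log_n = rng.normal(-3.5, 0.5, size=20_000)
log_mstar = 11.0 - 1.2 * (log_n + 3.5) + rng.normal(0.0, 0.2, size=log_n.size)

# Assumed progenitor/descendant PDF in number-density space (Gaussian here):
mu_n, sigma_n = -3.2, 0.3
weights = np.exp(-0.5 * ((log_n - mu_n) / sigma_n) ** 2)
weights /= weights.sum()

weighted_mean_mass = np.sum(weights * log_mstar)
fixed_bin = np.abs(log_n - mu_n) < 0.1      # "fixed-width bin" style selection
print("probability-weighted <log M*>:", round(weighted_mean_mass, 3))
print("fixed-bin <log M*>:           ", round(log_mstar[fixed_bin].mean(), 3))
```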
Radiative transition of hydrogen-like ions in quantum plasma
NASA Astrophysics Data System (ADS)
Hu, Hongwei; Chen, Zhanbin; Chen, Wencong
2016-12-01
At fusion plasma electron temperature and number density regimes of 1 × 10^3-1 × 10^7 K and 1 × 10^28-1 × 10^31 m^-3, respectively, the excited states and radiative transition of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing the fusion plasma than the Debye screening model. The relativistic correction to the bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude in the same number density regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
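A Monte Carlo benchmark of the kind the PDE solution is compared against can be sketched as below; the linearized swing-type generator model and the Ornstein-Uhlenbeck power input are illustrative assumptions, not the specific system in the report.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo sketch for a generator with time-correlated power input.
# Model choices (linearized swing equation, OU input) are illustrative.
H, D, P_mech = 4.0, 1.0, 0.95        # inertia, damping, mean mechanical power
tau_c, sigma_p = 2.0, 0.05           # OU correlation time and noise amplitude
dt, T, n_paths = 0.01, 20.0, 20_000
steps = int(T / dt)

omega = np.zeros(n_paths)            # frequency deviation
p_in = np.full(n_paths, P_mech)      # time-correlated power input (OU process)

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    p_in += (-(p_in - P_mech) / tau_c) * dt + sigma_p * np.sqrt(2.0 / tau_c) * dW
    omega += ((p_in - P_mech - D * omega) / (2.0 * H)) * dt

# The ensemble histogram approximates the marginal PDF of omega at time T,
# which is the quantity a PDF-method PDE solution would be compared against.
pdf, edges = np.histogram(omega, bins=60, density=True)
print("std of frequency deviation at T:", omega.std())
print("peak of the estimated PDF:", pdf.max())
```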
Biver, E; Durosier, C; Chevalley, T; Herrmann, F R; Ferrari, S; Rizzoli, R
2015-08-01
In a cross-sectional analysis in postmenopausal women, prior ankle fractures were associated with lower areal bone mineral density (BMD) and trabecular bone alterations compared to no fracture history. Compared to women with forearm fractures, microstructure alterations were of lower magnitude. These data suggest that ankle fractures are another manifestation of bone fragility. Whether ankle fractures represent fragility fractures associated with low areal bone mineral density (aBMD) and volumetric bone mineral density (vBMD) and/or bone microstructure alterations remains unclear, in contrast to the well-recognised association between forearm fractures and osteoporosis. The objective of this study was to investigate aBMD, vBMD and bone microstructure in postmenopausal women with prior ankle fracture in adulthood, compared with women without prior fracture or with women with prior forearm fractures, considered as typically of osteoporotic origin. In a cross-sectional analysis in the Geneva Retirees Cohort study, 63 women with ankle fracture and 59 with forearm fracture were compared to 433 women without fracture (mean age, 65 ± 1 years). aBMD was measured by dual-energy X-ray absorptiometry; distal radius and tibia vBMD and bone microstructure were measured by high-resolution peripheral quantitative computed tomography. Compared with women without fracture, those with ankle fractures had lower aBMD, radius vBMD (-7.9%), trabecular density (-10.7%), number (-7.3%) and thickness (-4.6%) and higher trabecular spacing (+14.5%) (P < 0.05 for all). Tibia trabecular variables were also altered. For 1 standard deviation decrease in total hip aBMD or radius trabecular density, odds ratios for ankle fractures were 2.2 and 1.6, respectively, vs 2.2 and 2.7 for forearm fracture, respectively (P ≤ 0.001 for all). Compared to women with forearm fractures, those with ankle fractures had similar spine and hip aBMD, but microstructure alterations of lower magnitude. Women with ankle fractures have lower aBMD and vBMD and trabecular bone alterations, suggesting that ankle fractures are another manifestation of bone fragility.
Topics in Bayesian Hierarchical Modeling and its Monte Carlo Computations
NASA Astrophysics Data System (ADS)
Tak, Hyung Suk
The first chapter addresses a Beta-Binomial-Logit model that is a Beta-Binomial conjugate hierarchical model with covariate information incorporated via a logistic regression. Various researchers in the literature have unknowingly used improper posterior distributions or have given incorrect statements about posterior propriety because checking posterior propriety can be challenging due to the complicated functional form of a Beta-Binomial-Logit model. We derive data-dependent necessary and sufficient conditions for posterior propriety within a class of hyper-prior distributions that encompass those used in previous studies. Frequency coverage properties of several hyper-prior distributions are also investigated to see when and whether Bayesian interval estimates of random effects meet their nominal confidence levels. The second chapter deals with a time delay estimation problem in astrophysics. When the gravitational field of an intervening galaxy between a quasar and the Earth is strong enough to split light into two or more images, the time delay is defined as the difference between their travel times. The time delay can be used to constrain cosmological parameters and can be inferred from the time series of brightness data of each image. To estimate the time delay, we construct a Gaussian hierarchical model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian approach jointly infers model parameters via a Gibbs sampler. We also introduce a profile likelihood of the time delay as an approximation of its marginal posterior distribution. The last chapter specifies a repelling-attracting Metropolis algorithm, a new Markov chain Monte Carlo method to explore multi-modal distributions in a simple and fast manner. This algorithm is essentially a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes repelling, followed by an uphill move in density that aims to make local modes attracting. The downhill move is achieved via a reciprocal Metropolis ratio so that the algorithm prefers downward movement. The uphill move does the opposite using the standard Metropolis ratio which prefers upward movement. This down-up movement in density increases the probability of a proposed move to a different mode.
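The down-up move of the repelling-attracting idea can be sketched as follows. This is a simplified illustration that finishes with a plain Metropolis accept-reject step, whereas the published algorithm includes an additional correction factor at that stage to keep the target distribution exactly invariant; the bimodal target and tuning values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Bimodal target (unnormalized) for illustration.
def log_pi(x):
    return np.logaddexp(-0.5 * ((x + 3.0) / 0.6) ** 2,
                        -0.5 * ((x - 3.0) / 0.6) ** 2)

def down_up_proposal(x, scale=1.0):
    """Down-up move: a forced downhill step (reciprocal Metropolis ratio)
    followed by a forced uphill step (standard Metropolis ratio)."""
    while True:                                   # downhill: prefer lower density
        xd = x + scale * rng.normal()
        if np.log(rng.random()) < min(0.0, log_pi(x) - log_pi(xd)):
            break
    while True:                                   # uphill: prefer higher density
        xu = xd + scale * rng.normal()
        if np.log(rng.random()) < min(0.0, log_pi(xu) - log_pi(xd)):
            break
    return xu

x, chain = -3.0, []
for _ in range(20_000):
    prop = down_up_proposal(x)
    # Plain Metropolis accept-reject on the final proposal (the published
    # algorithm uses an extra correction term here to preserve pi exactly).
    if np.log(rng.random()) < min(0.0, log_pi(prop) - log_pi(x)):
        x = prop
    chain.append(x)

chain = np.array(chain)
print("fraction of samples in the +3 mode:", (chain > 0).mean())
```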
Nonlinear mixed effects modeling of gametocyte carriage in patients with uncomplicated malaria
2010-01-01
Background Gametocytes are the sexual form of the malaria parasite and the main agents of transmission. While there are several factors that influence host infectivity, the density of gametocytes appears to be the best single measure that is related to the human host's infectivity to mosquitoes. Despite the obviously important role that gametocytes play in the transmission of malaria and spread of anti-malarial resistance, it is common to estimate gametocyte carriage indirectly based on asexual parasite measurements. The objective of this research was to directly model observed gametocyte densities over time, during the primary infection. Methods Of 447 patients enrolled in sulphadoxine-pyrimethamine therapeutic efficacy studies in South Africa and Mozambique, a subset of 103 patients who had no gametocytes pre-treatment and who had at least three non-zero gametocyte densities over the 42-day follow up period were included in this analysis. Results A variety of different functions were examined. A modified version of the critical exponential function was selected for the final model given its robustness across different datasets and its flexibility in assuming a variety of different shapes. Age, site, initial asexual parasite density (logged to the base 10), and an empirical patient category were the co-variates that were found to improve the model. Conclusions A population nonlinear modeling approach seems promising and produced a flexible function whose estimates were stable across various different datasets. Surprisingly, dihydrofolate reductase and dihydropteroate synthetase mutation prevalence did not enter the model. This is probably related to a lack of power (quintuple mutations n = 12), and informative censoring; treatment failures were withdrawn from the study and given rescue treatment, usually prior to completion of follow up. PMID:20187935
Nonlinear mixed effects modeling of gametocyte carriage in patients with uncomplicated malaria.
Distiller, Greg B; Little, Francesca; Barnes, Karen I
2010-02-26
Gametocytes are the sexual form of the malaria parasite and the main agents of transmission. While there are several factors that influence host infectivity, the density of gametocytes appears to be the best single measure that is related to the human host's infectivity to mosquitoes. Despite the obviously important role that gametocytes play in the transmission of malaria and spread of anti-malarial resistance, it is common to estimate gametocyte carriage indirectly based on asexual parasite measurements. The objective of this research was to directly model observed gametocyte densities over time, during the primary infection. Of 447 patients enrolled in sulphadoxine-pyrimethamine therapeutic efficacy studies in South Africa and Mozambique, a subset of 103 patients who had no gametocytes pre-treatment and who had at least three non-zero gametocyte densities over the 42-day follow up period were included in this analysis. A variety of different functions were examined. A modified version of the critical exponential function was selected for the final model given its robustness across different datasets and its flexibility in assuming a variety of different shapes. Age, site, initial asexual parasite density (logged to the base 10), and an empirical patient category were the co-variates that were found to improve the model. A population nonlinear modeling approach seems promising and produced a flexible function whose estimates were stable across various different datasets. Surprisingly, dihydrofolate reductase and dihydropteroate synthetase mutation prevalence did not enter the model. This is probably related to a lack of power (quintuple mutations n = 12), and informative censoring; treatment failures were withdrawn from the study and given rescue treatment, usually prior to completion of follow up.
Epidemics in interconnected small-world networks.
Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong
2015-01-01
Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
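A minimal sketch of such a system (two Watts-Strogatz small-world networks joined by random inter-layer links, discrete-time SIS dynamics, illustrative parameter values) could look like the following; it is not the authors' simulation setup.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)

# Two small-world layers coupled by random interconnections, with a simple
# discrete-time SIS process. All parameter values are illustrative only.
n, k, p_rewire, n_inter = 500, 6, 0.1, 200
beta, mu, steps = 0.08, 0.2, 400          # infection and recovery probabilities

g1 = nx.watts_strogatz_graph(n, k, p_rewire, seed=1)
g2 = nx.watts_strogatz_graph(n, k, p_rewire, seed=2)
g = nx.disjoint_union(g1, g2)
for _ in range(n_inter):                  # interconnections between the layers
    g.add_edge(int(rng.integers(0, n)), int(n + rng.integers(0, n)))

infected = set(int(i) for i in rng.choice(2 * n, size=10, replace=False))
for _ in range(steps):
    new_infected = set(infected)
    for node in infected:
        for nb in g.neighbors(node):
            if nb not in infected and rng.random() < beta:
                new_infected.add(nb)
        if rng.random() < mu:             # recovery back to susceptible
            new_infected.discard(node)
    infected = new_infected

print("steady-state infection density:", len(infected) / (2 * n))
```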
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Horn, J. E.; Harter, T.
2011-06-01
Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30 % of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balances calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
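The mechanism behind the reported saving can be sketched by comparing the expected number of code-phase cells examined under a uniform sweep with a sweep ordered by a Gaussian prior. The cell count, prior width, and the simplifying assumption of certain detection in the correct cell are illustrative, so the resulting reduction will differ from the 41% figure, which corresponds to the paper's specific parameters.

```python
import numpy as np
from scipy.stats import norm

# Expected number of code-phase cells examined before acquisition, assuming
# detection is certain when the correct cell is examined (single dwell).
n_cells = 1024
centers = np.arange(n_cells)
prior = norm.pdf(centers, loc=n_cells / 2, scale=n_cells / 12)
prior /= prior.sum()

# Uniform sweep: the correct cell is equally likely at any search position.
e_uniform = (n_cells + 1) / 2

# Prior-ordered sweep: examine cells in order of decreasing prior probability.
order = np.argsort(prior)[::-1]
positions = np.empty(n_cells)
positions[order] = np.arange(1, n_cells + 1)
e_ordered = np.sum(prior * positions)

print(f"uniform sweep : {e_uniform:.1f} cells on average")
print(f"prior-ordered : {e_ordered:.1f} cells on average "
      f"({100 * (1 - e_ordered / e_uniform):.0f}% reduction)")
```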
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
The report covers formulation, methods of accelerating convergence, application to image deblurring, and extensions. The image is modeled as the output of a spatial linear filter driven by white noise; when the probability density function of the white noise is known, this model permits development of the joint probability density function, or likelihood function, for the image.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
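The structure of the inversion (tune Markov rates so a model-generated density matches a data-derived density) can be sketched as follows; the two-state channel, the linear voltage-like observable, and the optimizer settings are illustrative assumptions, not the deterministic PDE system used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the inversion idea: choose Markov rates so that the
# model-generated probability density of an observable (a voltage-like
# variable conditioned on channel state) matches a "data" density.
dt, steps, n_ch = 0.01, 2000, 200
bins = np.linspace(-1.5, 2.5, 41)

def simulate_density(k_open, k_close, seed=0):
    rng = np.random.default_rng(seed)
    state = np.zeros(n_ch, dtype=bool)            # closed = False, open = True
    v = np.zeros(n_ch)
    hist = np.zeros(len(bins) - 1)
    for _ in range(steps):
        p_flip = np.where(state, k_close, k_open) * dt
        state ^= rng.random(n_ch) < p_flip        # stochastic gating transitions
        v += (-v + state.astype(float)) * dt + 0.05 * np.sqrt(dt) * rng.normal(size=n_ch)
        hist += np.histogram(v[state], bins=bins)[0]   # density of v in the open state
    return hist / hist.sum()

data_density = simulate_density(k_open=0.8, k_close=1.5, seed=1)   # pseudo-experiment

def cost(log_rates):
    model = simulate_density(*np.exp(log_rates), seed=2)
    return np.sum((model - data_density) ** 2)

res = minimize(cost, x0=np.log([0.3, 0.5]), method="Nelder-Mead")
print("approximately recovered rates:", np.exp(res.x))   # true values: 0.8, 1.5
```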
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
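The estimator described above can be sketched in a few lines; iid draws from π1 stand in for the Harris ergodic Markov chain, and both densities are simple Gaussians known only up to normalizing constants, so the self-normalized form of the estimator is used. All distributional choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Self-normalized importance sampling: estimate E_pi[f(X)] from draws of pi_1.
def log_pi(x):        # target: N(2, 1), unnormalized
    return -0.5 * (x - 2.0) ** 2

def log_pi1(x):       # importance/instrumental density: N(0, 2^2), unnormalized
    return -0.5 * (x / 2.0) ** 2

# In the MCMC setting of the paper these draws would come from a Markov chain
# with invariant density pi_1; iid draws are used here for simplicity.
x = rng.normal(0.0, 2.0, size=100_000)
log_w = log_pi(x) - log_pi1(x)
w = np.exp(log_w - log_w.max())
w /= w.sum()

estimate = np.sum(w * x ** 2)
print("self-normalized IS estimate of E[X^2]:", estimate, "(exact: 5.0)")
```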
Dexter, Franklin; De Oliveira, Gildasio S; McCarthy, Robert J
2016-01-15
We surveyed anesthesiology residents to evaluate the predictive effect of prior residence on desired location for future practice opportunities. One thousand five hundred United States anesthesiology residents were invited to participate. One question asked whether they intend to enter academic practice when they graduate from their residency/fellowship training. The analysis categorized the responses into "surely yes" and "probably" versus "even," "probably not," and "surely no." "After finishing your residency/fellowship training, are you planning to look seriously (e.g., interview) at jobs located more than a 2-hour drive from a location where you or your family (e.g., spouse or partner/significant other) have lived previously?" Responses were categorized into "very probably" and "somewhat probably" versus "somewhat improbably" and "not probable." Other questions explored predictors of the relationships quantified using the area under the receiver operating characteristic curve (area under the curve) ± its standard error. Among the 696 respondents, 36.9% (N = 256) would "probably" consider an academic practice. Fewer than half of those (P < 0.0001) would "very probably" consider a distant location (31.6%, 99% CI 24.4%-39.6%). Respondents with prior formal research training (e.g., PhD or Master's) had greater interest in academic practice at a distant location (AUC 0.63 ± 0.03, P = 0.0002). Except among respondents with formal research training, a good question to ask a job applicant is whether the applicant or the applicant's family has previously lived in the area.
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
Prior probability and feature predictability interactively bias perceptual decisions
Dunovan, Kyle E.; Tremel, Joshua J.; Wheeler, Mark E.
2014-01-01
Anticipating a forthcoming sensory experience facilitates perception for expected stimuli but also hinders perception for less likely alternatives. Recent neuroimaging studies suggest that expectation biases arise from feature-level predictions that enhance early sensory representations and facilitate evidence accumulation for contextually probable stimuli while suppressing alternatives. Reasonably then, the extent to which prior knowledge biases subsequent sensory processing should depend on the precision of expectations at the feature level as well as the degree to which expected features match those of an observed stimulus. In the present study we investigated how these two sources of uncertainty modulated pre- and post-stimulus bias mechanisms in the drift-diffusion model during a probabilistic face/house discrimination task. We tested several plausible models of choice bias, concluding that predictive cues led to a bias in both the starting-point and rate of evidence accumulation favoring the more probable stimulus category. We further tested the hypotheses that prior bias in the starting-point was conditional on the feature-level uncertainty of category expectations and that dynamic bias in the drift-rate was modulated by the match between expected and observed stimulus features. Starting-point estimates suggested that subjects formed a constant prior bias in favor of the face category, which exhibits less feature-level variability, that was strengthened or weakened by trial-wise predictive cues. Furthermore, we found that the gain on face/house evidence was increased for stimuli with less ambiguous features and that this relationship was enhanced by valid category expectations. These findings offer new evidence that bridges psychological models of decision-making with recent predictive coding theories of perception. PMID:24978303
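The two bias mechanisms tested (a shifted starting point and a boosted drift rate for expected stimuli) can be sketched with a simple drift-diffusion simulation; all parameter values are illustrative, not fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ddm(drift, start, a=1.0, noise=1.0, dt=0.002, max_t=3.0):
    """Return (choice, reaction time); the upper boundary is the expected category."""
    x, t = start * a, 0.0
    while 0.0 < x < a and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= a else 0), t

def run(n, drift, start):
    out = np.array([simulate_ddm(drift, start) for _ in range(n)])
    return out[:, 0].mean(), out[out[:, 0] == 1, 1].mean()

# Valid cue: starting point shifted toward the expected boundary, boosted drift.
p_up, rt = run(1000, drift=1.2, start=0.6)
print(f"valid cue   : P(expected)={p_up:.2f}, mean RT={rt:.2f}s")

# Neutral cue: centered starting point, baseline drift toward the true category.
p_up, rt = run(1000, drift=0.8, start=0.5)
print(f"neutral cue : P(expected)={p_up:.2f}, mean RT={rt:.2f}s")
```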
Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation
Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.
1998-01-01
We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.
Syndrome Diagnosis: Human Intuition or Machine Intelligence?
Braaten, Øivind; Friestad, Johannes
2008-01-01
The aim of this study was to investigate whether artificial intelligence methods can represent objective methods that are essential in syndrome diagnosis. Most syndromes have no external criterion standard of diagnosis. The predictive value of a clinical sign used in diagnosis is dependent on the prior probability of the syndrome diagnosis. Clinicians often misjudge the probabilities involved. Syndromology needs objective methods to ensure diagnostic consistency, and take prior probabilities into account. We applied two basic artificial intelligence methods to a database of machine-generated patients - a ‘vector method’ and a set method. As reference methods we ran an ID3 algorithm, a cluster analysis and a naive Bayes’ calculation on the same patient series. The overall diagnostic error rate for the vector algorithm was 0.93%, and for the ID3 0.97%. For the clinical signs found by the set method, the predictive values varied between 0.71 and 1.0. The artificial intelligence methods that we used proved simple, robust and powerful, and represent objective diagnostic methods. PMID:19415142
Bayen, Ute J.; Kuhlmann, Beatrice G.
2010-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source guessing probabilities to the perceived contingency between sources and item types. When they do not have a representation of a contingency, they base their guesses on prior schematic knowledge. The authors provide support for this account in two experiments with sources presenting information that was expected for one source and somewhat unexpected for another. Schema-relevant information about the sources was provided at the time of encoding. When contingency perception was impeded by dividing attention, participants showed schema-based guessing (Experiment 1). Manipulating source-item contingency also affected guessing (Experiment 2). When this contingency was schema-inconsistent, it superseded schema-based expectations and led to schema-inconsistent guessing. PMID:21603251
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity, and it is a challenging task because of uncertainties which arise from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has proved to be a powerful tool for dealing with prognostic problems that are affected by uncertainties. However, most studies adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are an important kind of joint component in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
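The proposal idea can be sketched with a scalar toy state-space model; the random-walk-with-drift state, Gaussian noise levels, and mixture weight below are assumptions standing in for the crack-growth state equation and guided-wave measurements.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

# Sketch of a particle filter whose importance (proposal) density is a mixture
# of the transition density and a measurement-informed density, instead of the
# transition density alone as in the basic (bootstrap) particle filter.
Q, R, lam, n_p, T = 0.05, 0.10, 0.5, 2000, 50

def transition(x):
    return x + 0.1          # assumed deterministic drift of the growth model

truth, y_obs = [1.0], []
for _ in range(T):
    truth.append(transition(truth[-1]) + np.sqrt(Q) * rng.normal())
    y_obs.append(truth[-1] + np.sqrt(R) * rng.normal())

particles = np.full(n_p, 1.0)
for y in y_obs:
    pred = transition(particles)
    # Mixture proposal: with probability lam sample from the transition density,
    # otherwise from a Gaussian centred at the current measurement.
    from_prior = rng.random(n_p) < lam
    prop = np.where(from_prior,
                    pred + np.sqrt(Q) * rng.normal(size=n_p),
                    y + np.sqrt(R) * rng.normal(size=n_p))
    q = lam * norm.pdf(prop, pred, np.sqrt(Q)) + (1 - lam) * norm.pdf(prop, y, np.sqrt(R))
    w = norm.pdf(y, prop, np.sqrt(R)) * norm.pdf(prop, pred, np.sqrt(Q)) / q
    weights = w / w.sum()
    # Multinomial resampling to limit weight degeneracy.
    idx = rng.choice(n_p, size=n_p, p=weights)
    particles = prop[idx]

print("final state estimate:", particles.mean(), " truth:", truth[-1])
```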
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
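A short sketch of the first estimator follows, with the kernel scaling factor chosen from the sample alone via the common normal-reference rule; this is one plausible automatic choice, not necessarily the algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gaussian kernel density estimator with a data-driven scaling factor.
sample = rng.normal(0.0, 1.0, size=400)
h = 1.06 * sample.std(ddof=1) * sample.size ** (-1 / 5)   # normal-reference bandwidth

def kde(x_grid, data, bandwidth):
    u = (x_grid[:, None] - data[None, :]) / bandwidth
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / bandwidth

grid = np.linspace(-4, 4, 9)
est = kde(grid, sample, h)
true = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)
for g, e, t in zip(grid, est, true):
    print(f"x={g:+.1f}  estimate={e:.3f}  true={t:.3f}")
```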
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Fogelholm, Mikael; Anderssen, Sigmund; Gunnarsdottir, Ingibjörg; Lahti-Koski, Marjaana
2012-01-01
This systematic literature review examined the role of dietary macronutrient composition, food consumption and dietary patterns in predicting weight or waist circumference (WC) change, with and without prior weight reduction. The literature search covered year 2000 and onwards. Prospective cohort studies, case–control studies and interventions were included. The studies had adult (18–70 y), mostly Caucasian participants. Out of a total of 1,517 abstracts, 119 full papers were identified as potentially relevant. After a careful scrutiny, 50 papers were quality graded as A (highest), B or C. Forty-three papers with grading A or B were included in evidence grading, which was done separately for all exposure-outcome combinations. The grade of evidence was classified as convincing, probable, suggestive or no conclusion. We found probable evidence for high intake of dietary fibre and nuts predicting less weight gain, and for high intake of meat in predicting more weight gain. Suggestive evidence was found for a protective role against increasing weight from whole grains, cereal fibre, high-fat dairy products and high scores in an index describing a prudent dietary pattern. Likewise, there was suggestive evidence for both fibre and fruit intake in protection against larger increases in WC. Also suggestive evidence was found for high intake of refined grains, and sweets and desserts in predicting more weight gain, and for refined (white) bread and high energy density in predicting larger increases in WC. The results suggested that the proportion of macronutrients in the diet was not important in predicting changes in weight or WC. In contrast, plenty of fibre-rich foods and dairy products, and less refined grains, meat and sugar-rich foods and drinks were associated with less weight gain in prospective cohort studies. The results on the role of dietary macronutrient composition in prevention of weight regain (after prior weight loss) were inconclusive. PMID:22893781
Zaoutis, Theoklis E; Prasad, Priya A; Localio, A Russell; Coffin, Susan E; Bell, Louis M; Walsh, Thomas J; Gross, Robert
2010-09-01
Candida species are the leading cause of invasive fungal infections in hospitalized children and are the third most common isolates recovered from patients with healthcare-associated bloodstream infection in the United States. Few data exist on risk factors for candidemia in pediatric intensive care unit (PICU) patients. We conducted a population-based case-control study of PICU patients at Children's Hospital of Philadelphia during the period from 1997 through 2004. Case patients were identified using laboratory records, and control patients were selected from PICU rosters. Control patients were matched to case patients by incidence density sampling, adjusting for time at risk. Following conditional multivariate analysis, we performed weighted multivariate analysis to determine predicted probabilities for candidemia given certain risk factor combinations. We identified 101 case patients with candidemia (incidence, 3.5 cases per 1000 PICU admissions). Factors independently associated with candidemia included presence of a central venous catheter (odds ratio [OR], 30.4; 95% confidence interval [CI], 7.7-119.5), malignancy (OR, 4.0; 95% CI, 1.23-13.1), use of vancomycin for >3 days in the prior 2 weeks (OR, 6.2; 95% CI, 2.4-16), and receipt of agents with activity against anaerobic organisms for >3 days in the prior 2 weeks (OR, 3.5; 95% CI, 1.5-8.4). Predicted probability of having various combinations of the aforementioned factors ranged from 10.7% to 46%. The 30-day mortality rate was 44% among case patients and 14% among control patients (OR, 4.22; 95% CI, 2.35-7.60). To our knowledge, this is the first study to evaluate independent risk factors and to determine a population of children in PICUs at high risk for developing candidemia. Future efforts should focus on validation of these risk factors identified in a different PICU population and development of interventions for prevention of candidemia in critically ill children.
Building Time-Dependent Earthquake Recurrence Models for Probabilistic Loss Computations
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Nyst, M.
2013-12-01
We present a Risk Management perspective on earthquake recurrence on mature faults, and the ways that it can be modeled. The specificities of Risk Management relative to Probabilistic Seismic Hazard Assessment (PSHA) include the non-linearity of the exceedance probability curve for losses relative to the frequency of event occurrence, the fact that losses at all return periods are needed (and not at discrete values of the return period), and the set-up of financial models which sometimes require the modeling of realizations of the order in which events may occur (i.e., simulated event dates are important, whereas only average rates of occurrence are routinely used in PSHA). We use New Zealand as a case study and review the physical characteristics of several faulting environments, contrasting them against properties of three probability density functions (PDFs) widely used to characterize the inter-event time distributions in time-dependent recurrence models. We review the data available to help constrain both the priors and the recurrence process, and we propose that with the current level of knowledge, the best way to quantify the recurrence of large events on mature faults is to use a Bayesian combination of models, i.e., the decomposition of the inter-event time distribution into a linear combination of individual PDFs with their weight given by the posterior distribution. Finally, we propose to the community: 1. A general debate on how best to incorporate our knowledge (e.g., from geology, geomorphology) on plausible models and model parameters, but also preserve the information on what we do not know; and 2. The creation and maintenance of a global database of priors, data, and model evidence, classified by tectonic region, special fluid characteristic (pH, compressibility, pressure), fault geometry, and other relevant properties so that we can monitor whether some trends emerge in terms of which model dominates in which conditions.
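The Bayesian combination can be sketched as a posterior-weighted mixture of candidate inter-event time PDFs. The three families below (lognormal, Brownian passage time via the inverse Gaussian, Weibull) are common choices in this literature and, like the weights and parameters, are assumptions for illustration since the abstract does not name them.

```python
import numpy as np
from scipy.stats import lognorm, invgauss, weibull_min

# Inter-event time density as a posterior-weighted mixture of renewal PDFs.
mean_ri = 300.0                                   # mean recurrence interval, years
components = [
    lognorm(s=0.5, scale=mean_ri),
    invgauss(mu=0.25, scale=mean_ri / 0.25),      # shape mu=0.25 gives CV = sqrt(mu) = 0.5
    weibull_min(c=2.2, scale=mean_ri / 0.885),    # scale chosen so the mean is ~300 yr
]
weights = np.array([0.5, 0.3, 0.2])               # posterior model weights (assumed)

def mixture_pdf(t):
    return sum(w * c.pdf(t) for w, c in zip(weights, components))

t = np.array([100.0, 200.0, 300.0, 500.0])
print("mixture density at t =", t, ":", np.round(mixture_pdf(t), 5))

# Simulated event dates, as needed by loss models that care about event ordering:
rng = np.random.default_rng(12)
picks = rng.choice(len(components), size=10, p=weights)
intervals = np.array([components[i].rvs(random_state=rng) for i in picks])
print("simulated event dates (years):", np.round(np.cumsum(intervals), 1))
```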
Rubin, Stephen P.; Miller, Ian M.; Elder, Nancy; Reisenbichler, Reginald R.; Duda, Jeffrey J.; Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.
2011-01-01
(3–18 m) near the mouth of the Elwha River, between the west end of Freshwater Bay and the base of Ediz Hook, were surveyed in August and September 2008, to establish baselines prior to dam removal. Density was estimated for 9 kelp taxa, 65 taxa of invertebrates larger than 2.5 cm any dimension and 24 fish taxa. Density averaged over all sites was 3.1 per square meter (/m2) for kelp, 2.7/m2 for invertebrates, and 0.1/m2 for fish. Community structure was partly controlled by substrate type, seafloor relief, and depth. On average, 12 more taxa occurred where boulders were present compared to areas lacking boulders but with similar base substrate. Four habitat types were identified: (1) Bedrock/boulder reefs had the highest kelp density and taxa richness, and were characterized by a canopy of Nereocystis leutkeana (bull kelp) at the water surface and a secondary canopy of perennial kelp 1–2 m above the seafloor; (2) Mixed sand and gravel-cobble habitats with moderate relief provided by boulders had the highest density of invertebrates and a taxa richness nearly equivalent to that for bedrock/boulder reefs; (3) Mixed sand and gravel-cobble habitats lacking boulders supported a moderate density of kelp, primarily annual species with low growth forms (blades close to the seafloor), and the lowest invertebrate density among habitats; and (4) Sand habitats had the lowest kelp density and taxa richness among habitats and a moderate density of invertebrates. Uncertainties about nearshore community responses to increases in deposited and suspended sediments highlight the opportunity to advance scientific understanding by measuring responses following dam removal.
ERIC Educational Resources Information Center
Wiggins, Lyna; Nower, Lia; Mayers, Raymond Sanchez; Peterson, N. Andrew
2010-01-01
This study examines the density of lottery outlets within ethnically concentrated neighborhoods in Middlesex County, New Jersey, using geospatial statistical analyses. No prior studies have empirically examined the relationship between lottery outlet density and population demographics. Results indicate that lottery outlets were not randomly…
Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun
2013-01-01
Obesity is related to hyperlipidemia and risk of cardiovascular disease. Health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivore men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipids/lipoproteins and the probability of coronary heart disease.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
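A sketch of the histogram construction for an unmodulated carrier follows; the sample count, bin count, and amplitude are illustrative values.

```python
import numpy as np

# Histogram of waveform samples taken over at least one half cycle, used as an
# empirical probability density function of the carrier amplitude. For an
# undistorted sinusoid this approaches the arcsine density; modulating the
# waveform changes the histogram shape, which is the quantity conveyed.
n_samples, amp = 4096, 1.0
t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)   # one full cycle
samples = amp * np.sin(t + 0.3)                                # arbitrary phase offset

hist, edges = np.histogram(samples, bins=20, range=(-amp, amp), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers[::4], hist[::4]):
    print(f"amplitude bin {c:+.2f}: density {h:.3f}")
```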
A partial differential equation for pseudocontact shift.
Charnock, G T P; Kuprov, Ilya
2014-10-07
It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of an exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
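The propagation step can be illustrated with a minimal sketch. The power curve shape and the 10% speed error below are illustrative assumptions, not the 28 manufacturer curves fitted by Lagrange's method in the study; the idea is simply to re-evaluate the curve at perturbed wind speeds and compare the mean outputs.

```python
import numpy as np

def power_curve(v, rated_power=2000.0, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """Hypothetical turbine power curve [kW]: cubic ramp between cut-in and
    rated speed, flat at rated power, zero outside the operating range."""
    v = np.asarray(v, dtype=float)
    p = np.zeros_like(v)
    ramp = (v >= v_cut_in) & (v < v_rated)
    p[ramp] = rated_power * (v[ramp]**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)
    p[(v >= v_rated) & (v <= v_cut_out)] = rated_power
    return p

def propagated_power_error(wind_speeds, speed_error=0.10):
    """Propagate a relative wind-speed measurement error through the power
    curve by re-evaluating it at v*(1 - error) and v*(1 + error)."""
    p_nominal = power_curve(wind_speeds).mean()
    p_low = power_curve(wind_speeds * (1.0 - speed_error)).mean()
    p_high = power_curve(wind_speeds * (1.0 + speed_error)).mean()
    return p_nominal, p_low, p_high

v = np.random.weibull(2.0, size=10000) * 8.0   # synthetic wind-speed record
print(propagated_power_error(v, speed_error=0.10))
```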
Ballistic Performance Model of Crater Formation in Monolithic, Porous Thermal Protection Systems
NASA Technical Reports Server (NTRS)
Miller, J. E.; Christiansen, E. L.; Deighton, K. D.
2014-01-01
Porous monolithic ablative systems insulate atmospheric reentry vehicles from reentry plasmas generated by atmospheric braking from orbital and exo-orbital velocities. Because these materials must sustain a temperature gradient of up to several thousand Kelvin over their thickness, it is important that they are near their pristine state prior to reentry. These materials may also lie on surfaces exposed to space environment threats such as orbital debris and meteoroids, leaving a probability that the exposed surfaces will perform below their prescribed values. Owing to the typically small size of impact craters in these materials, the local flow fields over these craters and the ablative process afford some margin in thermal protection designs for these locally reduced performance values. In this work, tests to develop ballistic performance models for thermal protection materials typical of those being used on Orion are discussed. A density profile as a function of depth of a typical monolithic ablator and substructure system is shown in Figure 1a.
Mayo, Christie; Shelley, Courtney; MacLachlan, N. James; Gardner, Ian; Hartley, David; Barker, Christopher
2016-01-01
The global distribution of bluetongue virus (BTV) has been changing recently, perhaps as a result of climate change. To evaluate the risk of BTV infection and transmission in a BTV-endemic region of California, sentinel dairy cows were evaluated for BTV infection, and populations of Culicoides vectors were collected at different sites using carbon dioxide traps. A deterministic model was developed to quantify risk and guide future mitigation strategies to reduce BTV infection in California dairy cattle. The greatest risk of BTV transmission was predicted within the warm Central Valley of California that contains the highest density of dairy cattle in the United States. Temperature and parameters associated with Culicoides vectors (transmission probabilities, carrying capacity, and survivorship) had the greatest effect on BTV's basic reproduction number, R0. Based on these analyses, optimal control strategies for reducing BTV infection risk in dairy cattle will be highly reliant upon early efforts to reduce vector abundance during the months prior to peak transmission. PMID:27812161
Communal nursing in wild house mice is not a by-product of group living: Females choose
NASA Astrophysics Data System (ADS)
Weidt, Andrea; Lindholm, Anna K.; König, Barbara
2014-01-01
Communal nursing, the provision of milk to non-offspring, has been argued to be a non-adaptive by-product of group living. We used 2 years of field data from a wild house mouse population to investigate this question. Communal nursing never occurred among females that previously lacked overlap in nest box use. Females nursed communally in only 33% of cases in which there was a communal nursing partner available from the same social group. Solitarily nursing females were not socially isolated in their group; nevertheless, high spatial associations prior to reproduction predicted which potential female partner was chosen for communal nursing. An increase in partner availability increased the probability of communal nursing, but population density itself had a negative effect, which may reflect increased female reproductive competition during summer. These results argue that females are selective in their choice of nursing partners and provide further support that communal nursing with the right partner is adaptive.
NASA Astrophysics Data System (ADS)
Jeffs, Brian D.; Christou, Julian C.
1998-09-01
This paper addresses post processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited, and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
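The GGMRF density referred to above has a standard form in the literature (following Bouman and Sauer); the abstract does not give the authors' exact parameter choices, so the expression below is a generic sketch.

```latex
% Generic GGMRF prior over an image x; the neighborhood system N, weights b_ij,
% scale sigma and shape 1 <= p <= 2 are modeling choices (p = 2 recovers a
% Gaussian MRF, smaller p preserves sharp edges better).
p(x) \propto \exp\!\left( -\frac{1}{p\,\sigma^{p}}
      \sum_{\{i,j\} \in \mathcal{N}} b_{ij}\,\lvert x_i - x_j \rvert^{p} \right)
```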
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
2017-08-19
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1997-04-01
To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.
Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.
McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark
2018-07-01
To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as the vehicle density increases; when the vehicle braking probability is low, emergency braking of vehicles is likely and causes large fluctuations in the saturated flow; the saturated flow decreases slightly with increasing pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitant behavior of pedestrians when deciding whether to back up; the maximum flow is sensitive to the pedestrian generation probability and decreases rapidly as that probability increases, becoming approximately zero when the probability exceeds 0.5. The simulations show that frequent crossing behavior strongly reduces vehicle flow, and the vehicle flow rapidly enters a serious congestion state as the pedestrian generation probability increases.
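For reference, the baseline NaSch update that the proposed model extends consists of four parallel rules (accelerate, brake to the gap, random slowdown with a braking probability, move). The sketch below implements only this baseline on a ring road; the pedestrian-conflict rules of the paper are not reproduced here.

```python
import numpy as np

def nasch_step(positions, velocities, road_length, v_max=5, p_brake=0.3, rng=None):
    """One parallel update of the basic Nagel-Schreckenberg model on a ring road."""
    rng = rng or np.random.default_rng()
    order = np.argsort(positions)
    pos, vel = positions[order], velocities[order]
    n = len(pos)
    gaps = (np.roll(pos, -1) - pos - 1) % road_length   # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)                    # 1) accelerate
    vel = np.minimum(vel, gaps)                         # 2) brake to avoid collision
    slow = rng.random(n) < p_brake                      # 3) random slowdown
    vel[slow] = np.maximum(vel[slow] - 1, 0)
    pos = (pos + vel) % road_length                     # 4) move
    return pos, vel

rng = np.random.default_rng(0)
pos = np.sort(rng.choice(100, size=20, replace=False))
vel = np.zeros(20, dtype=int)
for _ in range(50):
    pos, vel = nasch_step(pos, vel, road_length=100, rng=rng)
print(pos)
```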
Sleep Spindle Density Predicts the Effect of Prior Knowledge on Memory Consolidation
Lambon Ralph, Matthew A.; Kempkes, Marleen; Cousins, James N.; Lewis, Penelope A.
2016-01-01
Information that relates to a prior knowledge schema is remembered better and consolidates more rapidly than information that does not. Another factor that influences memory consolidation is sleep, and growing evidence suggests that sleep-related processing is important for integration with existing knowledge. Here, we examine how sleep-related mechanisms interact with the schema-dependent memory advantage. Participants first established a schema over 2 weeks. Next, they encoded new facts, which were either related to the schema or completely unrelated. After a 24 h retention interval, including a night of sleep, which we monitored with polysomnography, participants encoded a second set of facts. Finally, memory for all facts was tested in a functional magnetic resonance imaging scanner. Behaviorally, sleep spindle density predicted an increase of the schema benefit to memory across the retention interval. Higher spindle densities were associated with reduced decay of schema-related memories. Functionally, spindle density predicted increased disengagement of the hippocampus across 24 h for schema-related memories only. Together, these results suggest that sleep spindle activity is associated with the effect of prior knowledge on memory consolidation. SIGNIFICANCE STATEMENT Episodic memories are gradually assimilated into long-term memory and this process is strongly influenced by sleep. The consolidation of new information is also influenced by its relationship to existing knowledge structures, or schemas, but the role of sleep in such schema-related consolidation is unknown. We show that sleep spindle density predicts the extent to which schemas influence the consolidation of related facts. This is the first evidence that sleep is associated with the interaction between prior knowledge and long-term memory formation. PMID:27030764
High-Speed Photography of Detonation Propagation in Dynamically Precompressed Liquid Explosives
NASA Astrophysics Data System (ADS)
Petel, Oren; Higgins, Andrew; Yoshinaka, Akio; Zhang, Fan
2007-06-01
The propagation of detonation in shock-compressed nitromethane was observed with a high-speed framing camera. The test explosive, nitromethane, was compressed by a reverberating shock wave to pressures on the order of 10 GPa prior to being detonated by a secondary detonation event. The pressure and density in the test explosive prior to detonation were determined using two methods: manganin strain gauge measurements and LS-DYNA simulations. The velocity of the detonation front was determined from consecutive frames and correlated with the density of the explosive after the reverberating shock wave and prior to detonation. Observing detonation propagation under these non-ambient conditions provides data which can be useful in the validation of equation of state models.
Jue, Joshua S; Barboza, Marcelo Panizzutti; Prakash, Nachiketh S; Venkatramani, Vivek; Sinha, Varsha R; Pavan, Nicola; Nahar, Bruno; Kanabur, Pratik; Ahdoot, Michael; Dong, Yan; Satyanarayana, Ramgopal; Parekh, Dipen J; Punnen, Sanoj
2017-07-01
To compare the predictive accuracy of prostate-specific antigen (PSA) density vs PSA across different PSA ranges and by prior biopsy status in a prospective cohort undergoing prostate biopsy. Men from a prospective trial underwent an extended template biopsy to evaluate for prostate cancer at 26 sites throughout the United States. The area under the receiver operating characteristic curve assessed the predictive accuracy of PSA density vs PSA across 3 PSA ranges (<4 ng/mL, 4-10 ng/mL, >10 ng/mL). We also investigated the effect of varying the PSA density cutoffs on the detection of cancer and assessed the performance of PSA density vs PSA in men with or without a prior negative biopsy. Among 1290 patients, 585 (45%) and 284 (22%) men had prostate cancer and significant prostate cancer, respectively. PSA density performed better than PSA in detecting any prostate cancer within a PSA of 4-10 ng/mL (area under the receiver operating characteristic curve [AUC]: 0.70 vs 0.53, P < .0001) and within a PSA >10 ng/mL (AUC: 0.84 vs 0.65, P < .0001). PSA density was significantly more predictive than PSA in detecting any prostate cancer in men without (AUC: 0.73 vs 0.67, P < .0001) and with (AUC: 0.69 vs 0.55, P < .0001) a previous biopsy; however, the incremental difference in AUC was higher among men with a previous negative biopsy. Similar inferences were seen for significant cancer across all analyses. As PSA increases, PSA density becomes a better marker for predicting prostate cancer compared with PSA alone. Additionally, PSA density performed better than PSA in men with a prior negative biopsy. Copyright © 2017 Elsevier Inc. All rights reserved.
The role of demographic compensation theory in incidental take assessments for endangered species
McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts
2011-01-01
Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. In the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
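For a scalar state with a single lower bound, the truncation idea can be sketched in closed form: the Gaussian mass below the bound is collapsed into a point mass at the bound, so the bound itself carries nonzero probability. The parameter values below are illustrative assumptions, not taken from the abstract.

```python
from math import erf, exp, pi, sqrt

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def truncated_plus_delta_mean(mu, sigma, lower):
    """Mean of the distribution obtained by collapsing the Gaussian mass below
    `lower` into a point mass (delta distribution) at the bound."""
    a = (lower - mu) / sigma
    mass_below = norm_cdf(a)
    mean_above = mu * (1.0 - mass_below) + sigma * norm_pdf(a)  # E[X; X > lower]
    return lower * mass_below + mean_above

# Example: a sea-ice concentration analysis of -0.05 +/- 0.1 is pulled to about 0.02
print(truncated_plus_delta_mean(-0.05, 0.1, 0.0))
```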
The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.
Dias, José M B; Leitao, José M N
2002-01-01
This paper presents an effective algorithm for absolute phase (not simply modulo-2-pi) estimation from incomplete, noisy and modulo-2pi observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2-pi-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques and an iterative conditional modes (ICM) step (pi-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2pi-multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data is low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
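The linearity follows from conjugacy; a sketch under the binomial observation model stated above (the report's exact parameterization may differ) is:

```latex
% Beta prior on the rate x and a binomial observation of y jumps in n trials:
p(x) \propto x^{k}(1-x)^{m}, \qquad
p(y \mid x) \propto x^{y}(1-x)^{n-y},
% hence the posterior is again a beta density, and the MMSE estimate
% (the posterior mean) is linear in the observed count y:
\hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid y] = \frac{k+1+y}{k+m+2+n}.
```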
Information Density and Syntactic Repetition.
Temperley, David; Gildea, Daniel
2015-11-01
In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, S.O.; Bhatia, A.K.
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284–500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
Mitigation of acidified salmon rivers - effects of liming on young brown trout Salmo trutta.
Hesthagen, T; Larsen, B M; Bolstad, G; Fiske, P; Jonsson, B
2017-11-01
In southern Norway, 22 acidified rivers supporting anadromous salmonids were mitigated with lime to improve water quality and restore fish populations. In 13 of these rivers, effects on Salmo trutta and Salmo salar densities were monitored over 10-12 years, grouped into age 0 and age ≥ 1 year fish. These rivers had a mean annual discharge of between 4.9 and 85.5 m^3 s^-1, and six of them were regulated for hydro-power production. Salmo salar were lost in six of these rivers prior to liming, and highly reduced in the remaining seven rivers. Post-liming, S. salar became re-established in all six rivers with lost populations, and recovered in the seven other rivers. Salmo trutta occurred in all 13 study rivers prior to liming. Despite the improved water quality, both age 0 and age ≥ 1 year S. trutta densities decreased as S. salar density increased, with an average reduction of >50% after 10 years of liming. For age 0 year S. trutta this effect was less strong in rivers where S. salar were present prior to liming. In contrast, densities of S. trutta increased in unlimed streams above the anadromous stretches in two of the rivers following improved water quality due to natural recovery. Density increases of both age 0 and age ≥ 1 year S. salar showed a positive effect of river discharge. The results suggest that the decline in S. trutta density after liming is related to interspecific resource competition due to the recovery of S. salar. Thus, improved water quality through liming may not only sustain susceptible species, but can have a negative effect on species that are more tolerant prior to the treatment, such as S. trutta. © 2017 The Fisheries Society of the British Isles.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Wasser, Samuel K.; Hayward, Lisa S.; Hartman, Jennifer; Booth, Rebecca K.; Broms, Kristin; Berg, Jodi; Seely, Elizabeth; Lewis, Lyle; Smith, Heath
2012-01-01
State and federal actions to conserve northern spotted owl (Strix occidentalis caurina) habitat are largely initiated by establishing habitat occupancy. Northern spotted owl occupancy is typically assessed by eliciting their response to simulated conspecific vocalizations. However, proximity of barred owls (Strix varia), a significant threat to northern spotted owls, can suppress northern spotted owl responsiveness to vocalization surveys and hence their probability of detection. We developed a survey method to simultaneously detect both species that does not require vocalization. Detection dogs (Canis familiaris) located owl pellets accumulated under roost sites, within search areas selected using habitat association maps. We compared success of detection dog surveys to vocalization surveys slightly modified from the U.S. Fish and Wildlife Service's Draft 2010 Survey Protocol. Seventeen 2 km × 2 km polygons were each surveyed multiple times in an area where northern spotted owls were known to nest prior to 1997 and barred owl density was thought to be low. Mitochondrial DNA was used to confirm species from pellets detected by dogs. Spotted owl and barred owl detection probabilities were significantly higher for dog than vocalization surveys. For spotted owls, this difference increased with number of site visits. Cumulative detection probabilities of northern spotted owls were 29% after session 1, 62% after session 2, and 87% after session 3 for dog surveys, compared to 25% after session 1, increasing to 59% by session 6 for vocalization surveys. Mean detection probability for barred owls was 20.1% for dog surveys and 7.3% for vocal surveys. Results suggest that detection dog surveys can complement vocalization surveys by providing a reliable method for establishing occupancy of both northern spotted and barred owl without requiring owl vocalization. This helps meet objectives of Recovery Actions 24 and 25 of the Revised Recovery Plan for the Northern Spotted Owl. PMID:22916175
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density (PDF) and cumulative distribution functions (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
The risks and returns of stock investment in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-03-01
The risks and returns of stock investment are discussed via numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risks of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite results hold for weak EDS; (ii) increasing the initial position reduces the risks of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those reported in other studies, and good agreement is found between them.
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which yields a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right-hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
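In symbols, the definition described above can be sketched as follows (the notation here is assumed; the report's own symbols may differ):

```latex
% Fine-grained PDF of the state vector phi at (x, t):
f(\psi; \mathbf{x}, t) = \delta\big(\psi - \phi(\mathbf{x}, t)\big),
% Averaged PDF (APDF): ensemble average of the FG-PDF with mass-density weighting,
\tilde{P}(\psi; \mathbf{x}, t)
  = \frac{\big\langle \rho(\mathbf{x}, t)\,\delta\big(\psi - \phi(\mathbf{x}, t)\big) \big\rangle}
         {\big\langle \rho(\mathbf{x}, t) \big\rangle},
% so density-weighted (Favre-type) mean variables follow as
\tilde{Q}(\mathbf{x}, t) = \int Q(\psi)\,\tilde{P}(\psi; \mathbf{x}, t)\,d\psi .
```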
Wagner, Tyler; Deweber, Jefferson T.; Detar, Jason; Kristine, David; Sweka, John A.
2014-01-01
Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
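As background for the three-pass removal design, the sketch below shows the classical constant-capture-probability removal estimator for a single site, not the hierarchical Bayesian model fitted in the study; it profiles the conditional multinomial likelihood over the capture probability and expands the total catch.

```python
import numpy as np

def three_pass_removal_estimate(c1, c2, c3, grid=np.linspace(0.01, 0.99, 981)):
    """Constant capture probability removal estimator: find the p that best
    explains the decline in pass-specific catches, then expand total catch."""
    total = c1 + c2 + c3
    best_p, best_ll = None, -np.inf
    for p in grid:
        q = 1.0 - p
        p_capture = 1.0 - q**3                      # caught in any of 3 passes
        probs = np.array([p, p * q, p * q**2]) / p_capture
        ll = c1 * np.log(probs[0]) + c2 * np.log(probs[1]) + c3 * np.log(probs[2])
        if ll > best_ll:
            best_p, best_ll = p, ll
    n_hat = total / (1.0 - (1.0 - best_p)**3)       # expand to total abundance
    return n_hat, best_p

# Example: catches of 30, 12, and 5 fish on successive passes
print(three_pass_removal_estimate(30, 12, 5))
```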
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
A new approach to estimating trends in chlamydia incidence.
Ali, Hammad; Cameron, Ewan; Drovandi, Christopher C; McCaw, James M; Guy, Rebecca J; Middleton, Melanie; El-Hayek, Carol; Hocking, Jane S; Kaldor, John M; Donovan, Basil; Wilson, David P
2015-11-01
Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method for estimating at a population level the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex using routine surveillance data. A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted for priors on the time-independent parameters; the shapes of these beta parameters were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour small changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures, that is, prevalence and incidence as reported by other studies. Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ∼120% over 12 years. Nationally, an estimated 356 000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponded to a chlamydia annual incidence estimate of 1.54% in 2013, increased from 0.81% in 2001 (∼90% increase). We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent and trends in chlamydia incidence. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
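The Matérn prior mentioned above can be sketched with a standard covariance function; the smoothness (here ν = 3/2), length-scale and variance below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def matern_32_cov(points, lengthscale=3.0, variance=1.0):
    """Matern covariance with smoothness nu = 3/2 over 1-D inputs (e.g. years
    or age cohorts); it favours small changes between nearby inputs."""
    d = np.abs(np.subtract.outer(points, points)) / lengthscale
    return variance * (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)

years = np.arange(2001, 2014, dtype=float)
K = matern_32_cov(years, lengthscale=3.0)
print(np.round(K[:3, :3], 3))   # strong correlation between consecutive years
```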
NASA Astrophysics Data System (ADS)
Amiri, N.; Polewski, P.; Yao, W.; Krzystek, P.; Skidmore, A. K.
2017-09-01
Airborne Laser Scanning (ALS) is a widespread method for forest mapping and management purposes. While common ALS techniques provide valuable information about the forest canopy and intermediate layers, the point density near the ground may be poor due to dense overstory conditions. The current study highlights a new method for detecting stems of single trees in 3D point clouds obtained from high-density ALS with a density of 300 points/m2. Compared to standard ALS data, this elevated point density, achieved through a lower flight height (150-200 m), leads to more laser reflections from tree stems. In this work, we propose a three-tiered method which works on the point, segment and object levels. First, for each point we calculate the likelihood that it belongs to a tree stem, derived from the radiometric and geometric features of its neighboring points. In the next step, we construct short stem segments based on high-probability stem points, and classify the segments by considering the distribution of points around them as well as their spatial orientation, which encodes the prior knowledge that trees are mainly vertically aligned due to gravity. Finally, we apply hierarchical clustering on the positively classified segments to obtain point sets corresponding to single stems, and perform ℓ1-based orthogonal distance regression to robustly fit lines through each stem point set. The ℓ1-based method is less sensitive to outliers compared to least-squares approaches. From the fitted lines, the planimetric tree positions can then be derived. Experiments were performed on two plots from the Hochficht forest in the Oberösterreich region of Austria. We marked a total of 196 reference stems in the point clouds of both plots by visual interpretation. The evaluation of the automatically detected stems showed a classification precision of 0.86 and 0.85 for Plots 1 and 2, respectively, with recall values of 0.7 and 0.67.
2MASS wide-field extinction maps. V. Corona Australis
NASA Astrophysics Data System (ADS)
Alves, João; Lombardi, Marco; Lada, Charles J.
2014-05-01
We present a near-infrared extinction map of a large region (~870 deg2) covering the isolated Corona Australis complex of molecular clouds. We reach a 1-σ error of 0.02 mag in the K-band extinction with a resolution of 3 arcmin over the entire map. We find that the Corona Australis cloud is about three times as large as revealed by previous CO and dust emission surveys. The cloud consists of a 45 pc long complex of filamentary structure from the well known star forming Western end (the head, N ≥ 10^23 cm^-2) to the diffuse Eastern end (the tail, N ≤ 10^21 cm^-2). Remarkably, about two thirds of the complex, both in size and mass, lie beneath AV ~ 1 mag. We find that the probability density function (PDF) of the cloud cannot be described by a single log-normal function. Similar to prior studies, we found a significant excess at high column densities, but a log-normal + power-law tail fit does not work well at low column densities. We show that at low column densities near the peak of the observed PDF, both the amplitude and shape of the PDF are dominated by noise in the extinction measurements, making it impractical to derive the intrinsic cloud PDF below AK < 0.15 mag. Above AK ~ 0.15 mag, essentially the molecular component of the cloud, the PDF appears to be best described by a power law with index -3, but could also be described as the tail of a broad and relatively low-amplitude log-normal PDF that peaks at very low column densities. FITS files of the extinction maps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A18
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Bremer, J. E.; Harter, T.
2012-08-01
Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).
NASA Astrophysics Data System (ADS)
Stephanik, Brian Michael
This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.
Analytical approach to an integrate-and-fire model with spike-triggered adaptation
NASA Astrophysics Data System (ADS)
Schwalger, Tilo; Lindner, Benjamin
2015-12-01
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
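A numerical counterpart to such analytical densities can be obtained by simulating the model and histogramming the membrane potential; the sketch below does this for an adaptive leaky integrate-and-fire neuron with hypothetical parameters. It is not the authors' derivation, only a brute-force check of the same kind of stationary density.

```python
# Illustrative sketch (hypothetical parameters): Euler-Maruyama simulation of a
# leaky integrate-and-fire neuron with a spike-triggered adaptation current,
# followed by a histogram estimate of the stationary membrane-potential density.
import numpy as np

dt, steps = 1e-4, 200_000
mu, D = 1.5, 0.01          # mean input and noise intensity
tau_a, delta_a = 0.1, 0.3  # adaptation time scale and spike-triggered jump
v_th, v_reset = 1.0, 0.0

rng = np.random.default_rng(1)
v, a = 0.0, 0.0
vs = np.empty(steps)
for i in range(steps):
    # voltage update with noise, plus deterministic decay of the adaptation variable
    v += (mu - v - a) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    a += -(a / tau_a) * dt
    if v >= v_th:          # spike: reset voltage and increment adaptation current
        v = v_reset
        a += delta_a
    vs[i] = v

density, edges = np.histogram(vs, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(f"density peak near v = {centers[np.argmax(density)]:.2f}")
```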
Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target
Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji
2009-01-01
In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN) in which the capability of each sensor is kept relatively limited so that large-scale WSNs can be constructed at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. Based on these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
Polusny, M A; Erbes, C R; Murdoch, M; Arbisi, P A; Thuras, P; Rath, M B
2011-04-01
National Guard troops are at increased risk for post-traumatic stress disorder (PTSD); however, little is known about risk and resilience in this population. The Readiness and Resilience in National Guard Soldiers Study is a prospective, longitudinal investigation of 522 Army National Guard troops deployed to Iraq from March 2006 to July 2007. Participants completed measures of PTSD symptoms and potential risk/protective factors 1 month before deployment. Of these, 81% (n=424) completed measures of PTSD, deployment stressor exposure and post-deployment outcomes 2-3 months after returning from Iraq. New onset of probable PTSD 'diagnosis' was measured by the PTSD Checklist - Military (PCL-M). Independent predictors of new-onset probable PTSD were identified using hierarchical logistic regression analyses. At baseline prior to deployment, 3.7% had probable PTSD. Among soldiers without PTSD symptoms at baseline, 13.8% reported post-deployment new-onset probable PTSD. Hierarchical logistic regression adjusted for gender, age, race/ethnicity and military rank showed that reporting more stressors prior to deployment predicted new-onset probable PTSD [odds ratio (OR) 2.20] as did feeling less prepared for deployment (OR 0.58). After accounting for pre-deployment factors, new-onset probable PTSD was predicted by exposure to combat (OR 2.19) and to combat's aftermath (OR 1.62). Reporting more stressful life events after deployment (OR 1.96) was associated with increased odds of new-onset probable PTSD, while post-deployment social support (OR 0.31) was a significant protective factor in the etiology of PTSD. Combat exposure may be unavoidable in military service members, but other vulnerability and protective factors also predict PTSD and could be targets for prevention strategies.
Colonius, Hans; Diederich, Adele
2011-07-01
The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for a theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task, where subjects are asked to respond to signals from a target modality only. Invoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reaction times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
NASA Astrophysics Data System (ADS)
Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko
We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition offers advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. However, in markets where low-cost ACSs compete on price, the quality of available cameras and image control is limited, so ACSs based on face recognition must handle much lower-quality images (e.g., defocused or poorly gain-controlled images) than high-security systems such as immigration control. To address these image quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. During each authentication the algorithm evaluates and uses only the reliable features among those trained, achieving high recognition performance. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality, with an EER (Equal Error Rate) about four times lower across a variety of image conditions than a system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which performs worse but consistently, because of its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions but degrades to PCA-level performance under poor image conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
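For the single-Gaussian case, the closed-form posterior has the standard linear-Gaussian structure; the sketch below illustrates that conjugate update for a generic linear forward model d = G m + e, with a hypothetical operator and covariances standing in for the convolutional model and rock-physics prior.

```python
# Illustrative sketch (hypothetical operator and covariances): closed-form
# Gaussian posterior for a linear forward model d = G m + e, the structure
# exploited in the single-Gaussian inversion case.
import numpy as np

rng = np.random.default_rng(2)
n_model, n_data = 50, 40
G = rng.normal(size=(n_data, n_model)) / np.sqrt(n_model)   # stand-in forward operator

mu_prior = np.zeros(n_model)
C_prior = np.exp(-np.abs(np.subtract.outer(range(n_model), range(n_model))) / 5.0)
C_noise = 0.05 * np.eye(n_data)

m_true = rng.multivariate_normal(mu_prior, C_prior)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(n_data), C_noise)

# Posterior covariance and mean (standard Gaussian conjugate update)
C_post = np.linalg.inv(np.linalg.inv(C_prior) + G.T @ np.linalg.inv(C_noise) @ G)
mu_post = C_post @ (np.linalg.inv(C_prior) @ mu_prior
                    + G.T @ np.linalg.inv(C_noise) @ d_obs)
print(np.round(mu_post[:5], 3))
```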
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained from only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order relations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) there are modelling advantages, because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki
2014-01-01
The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as effectiveness of internal exposure relative to the external exposure to γ-rays) is occasionally believed to be much greater than unity due to insufficient discussions on the difference of their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses in subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as the external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure from the 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but their maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099
Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2002-01-01
A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data, and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...
Properties of Traffic Risk Coefficient
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu
2009-10-01
We use the model incorporating the traffic interruption probability (Physica A 387 (2008) 6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability reduces the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic security.
Probability mass first flush evaluation for combined sewer discharges.
Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong
2010-01-01
The Korean government has put considerable effort into constructing sanitation facilities for controlling non-point source pollution, of which the first flush phenomenon is a prime example. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass given both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the representativeness of the gauged storms, based on a probability density function of rainfall volumes over the last two years, showed that all gauged storms were valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 indicated similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2 l0 (2 - α) / (v α), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
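The quoted velocity ratio can be reproduced from atomic masses alone, since the mean thermal speed scales as 1/sqrt(m) and the common temperature cancels in the ratio; the short sketch below checks this using standard atomic masses, not data from the experiment.

```python
# Illustrative check: the reciprocal mean thermal speeds 1/v scale as sqrt(m),
# so the quoted ratio 1/v_Cu : 1/v_Sn : 1/v_Zn follows directly from the atomic
# masses (the gas temperature cancels in the ratio).
import numpy as np

masses = {"Cu": 63.55, "Sn": 118.71, "Zn": 65.38}      # atomic masses in u
inv_v = {el: np.sqrt(m) for el, m in masses.items()}   # 1/v proportional to sqrt(m)
ratios = {el: round(inv_v[el] / inv_v["Cu"], 2) for el in masses}
print(ratios)   # approximately {'Cu': 1.0, 'Sn': 1.37, 'Zn': 1.01}
```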
Statistical analysis of dislocations and dislocation boundaries from EBSD data.
Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N
2017-08-01
Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows accounting for measurement noise and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle previously applied to TEM data. In previous papers, dislocation boundaries were defined as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described by a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained from EBSD data, as reported in this paper. This opens the route for determining IDB and GNB probability density distribution functions separately from EBSD data, with increased statistical relevance as compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
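A fit of the two-Rayleigh mixture mentioned above can be sketched by maximum likelihood on synthetic disorientation-gradient data; the component scales, mixture weight, and sample sizes below are hypothetical.

```python
# Illustrative sketch (synthetic data): maximum-likelihood fit of a mixture of
# two Rayleigh densities, the functional form used to describe IDB and GNB
# disorientation distributions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rayleigh

rng = np.random.default_rng(3)
data = np.concatenate([rayleigh.rvs(scale=0.3, size=4000, random_state=rng),
                       rayleigh.rvs(scale=1.2, size=1000, random_state=rng)])

def neg_loglik(params):
    w, s1, s2 = params
    pdf = w * rayleigh.pdf(data, scale=s1) + (1 - w) * rayleigh.pdf(data, scale=s2)
    return -np.sum(np.log(pdf + 1e-300))   # guard against log(0)

res = minimize(neg_loglik, x0=[0.5, 0.5, 1.0],
               bounds=[(0.01, 0.99), (0.05, 5.0), (0.05, 5.0)])
w, s1, s2 = res.x
print(f"weight = {w:.2f}, Rayleigh scales = {s1:.2f}, {s2:.2f}")
```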
Neutron star radii, universal relations, and the role of prior distributions
Steiner, Andrew W.; Lattimer, James M.; Brown, Edward F.
2016-02-02
We investigate constraints on neutron star structure arising from the assumptions that neutron stars have crusts, that recent calculations of pure neutron matter limit the equation of state of neutron star matter near the nuclear saturation density, that the high-density equation of state is limited by causality and the largest high-accuracy neutron star mass measurement, and that general relativity is the correct theory of gravity. We explore the role of prior assumptions by considering two classes of equation of state models. In the first class, the intermediate- and high-density behavior of the equation of state is parameterized by piecewise polytropes. In the second class, the high-density behavior of the equation of state is parameterized by piecewise continuous line segments. The smallest density at which high-density matter appears is varied in order to allow for strong phase transitions above the nuclear saturation density. We critically examine correlations among the pressure of matter, radii, maximum masses, the binding energy, the moment of inertia, and the tidal deformability, paying special attention to the sensitivity of these correlations to prior assumptions about the equation of state. It is possible to constrain the radii of 1.4 solar mass neutron stars to be larger than 10 km, even without consideration of additional astrophysical observations, for example, those from photospheric radius expansion bursts or quiescent low-mass X-ray binaries. We are able to improve the accuracy of known correlations between the moment of inertia and compactness as well as the binding energy and compactness. Furthermore, we also demonstrate the existence of a correlation between the neutron star binding energy and the moment of inertia.
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that represents the given data as networks of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
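For orientation, the classical batch KDE baseline that KDESOINN extends can be written in a few lines; the sketch below uses a plain Gaussian KDE on synthetic data and does not reproduce the SOINN-based prototype network itself.

```python
# Illustrative sketch: a plain batch Gaussian kernel density estimate, the
# classical baseline that KDESOINN extends toward online, noise-robust
# operation (KDESOINN itself is not reproduced here).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
samples = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(1.5, 1.0, 500)])

kde = gaussian_kde(samples)        # bandwidth chosen by Scott's rule
grid = np.linspace(-5, 5, 11)
print(np.round(kde(grid), 3))      # estimated density on a coarse grid
```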
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background: Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose: We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results: For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations: Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions: The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
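A minimal sketch of the quantity being advocated: the predictive probability that a single-arm binomial trial ends in success, computed from a Beta posterior at an interim look; the trial numbers, prior, and success criterion below are hypothetical.

```python
# Illustrative sketch (hypothetical trial numbers): Bayesian predictive
# probability that a single-arm binomial trial will end with a posterior
# probability of efficacy above a threshold, given data at an interim look.
import numpy as np
from scipy.stats import betabinom, beta

n_interim, successes = 40, 26        # observed at the interim analysis
n_max = 80                           # predefined maximum sample size
p0, post_threshold = 0.5, 0.975      # null response rate and final success criterion
a0, b0 = 1.0, 1.0                    # Beta(1, 1) prior

a_post, b_post = a0 + successes, b0 + (n_interim - successes)
n_remaining = n_max - n_interim

pred_prob = 0.0
for future in range(n_remaining + 1):
    # probability of this future number of responders under the current posterior
    w = betabinom.pmf(future, n_remaining, a_post, b_post)
    # final posterior probability that the response rate exceeds p0
    final_post = 1.0 - beta.cdf(p0, a_post + future, b_post + n_remaining - future)
    pred_prob += w * (final_post > post_threshold)

print(f"predictive probability of trial success: {pred_prob:.3f}")
```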
Acquired prior knowledge modulates audiovisual integration.
Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A
2010-05-01
Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
[Cognitive and functional decline in the stage prior to the diagnosis of Alzheimer's disease].
García-Sánchez, C; Estévez-González, A; Boltes, A; Otermín, P; López-Góngora, M; Gironell, A; Kulisevsky, J
2003-12-01
The decline in the phase prior to the diagnosis of Alzheimer's disease (AD) is not well known, although this knowledge is necessary to evaluate the efficacy of new drugs that may influence the disease course prior to diagnosis. To contribute to better knowledge of the decline prior to diagnosis, we investigated the cognitive and functional deterioration over the 2-3 years before the probable AD diagnosis was established. We compared results obtained by 17 control subjects and 27 patients at the time of diagnosis of probable AD with results obtained 2-3 years before (interval of 27.7 ± 4 months). We compared memory functions (logical, recognition, learning and autobiographical memory), naming, visual and visuospatial gnosis, visuoconstructive praxis, verbal fluency and the Mini-Mental State Examination (MMSE), Informant Questionnaire and Blessed's Scale scores. Performance of control subjects did not change. AD patients showed a significant decline in scores, except for verbal fluency. In order of importance, cognitive decline was more marked in scores of learning memory, visuospatial gnosis, autobiographical memory and visuoconstructive praxis. Decline prior to the diagnosis of AD is characterized by a marked learning memory impairment. Deterioration of visuospatial gnosis and visuoconstructive praxis is greater than deterioration of MMSE and Informant Questionnaire scores.
Esfahani, Mohammad Shahrokh; Dougherty, Edward R
2015-01-01
Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilization of prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are neither complete nor regulatory, and carry no timing information, they can constrain the set of possible models representing the underlying interactions between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways into prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability over a Multinomial distribution. Since the Dirichlet distribution is the conjugate prior for the Multinomial, we propose optimization paradigms to estimate its parameters in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: the mammalian cell cycle and a p53 pathway model.
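The underlying conjugacy can be sketched in a few lines: a Dirichlet prior over Multinomial cell probabilities and its posterior update. The concentration parameters below are hypothetical; in the paper they would be chosen to satisfy the pathway-derived constraints.

```python
# Illustrative sketch (hypothetical concentration parameters): Dirichlet prior
# on Multinomial cell probabilities and its conjugate posterior update.
import numpy as np
from scipy.stats import dirichlet

alpha_prior = np.array([4.0, 2.0, 2.0, 1.0])   # encodes prior preference among states
counts = np.array([13, 5, 9, 3])               # observed Multinomial counts

alpha_post = alpha_prior + counts              # conjugate update
print("posterior mean:", np.round(alpha_post / alpha_post.sum(), 3))
print("one posterior draw:", np.round(dirichlet.rvs(alpha_post, random_state=0)[0], 3))
```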
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for ship manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.
Quantum mechanical probability current as electromagnetic 4-current from topological EM fields
NASA Astrophysics Data System (ADS)
van der Mark, Martin B.
2015-09-01
Starting from a complex 4-potential A = αdβ we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl1,3 as mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double field solution that was overlooked previously. A more general nullvector condition is found and wave-functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
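A sketch of the detection-probability component typical of spatially explicit capture-recapture models is given below, assuming the common half-normal decline of detection with distance; the baseline probability and spatial scale are hypothetical, not estimates from this study.

```python
# Illustrative sketch (hypothetical parameters): the half-normal detection
# function often used in spatially explicit capture-recapture models, where
# detection probability declines with distance between an individual's
# activity centre and a sampling location.
import numpy as np

def detection_prob(distance_km, p0=0.3, sigma_km=5.0):
    """Baseline detection probability p0 scaled by a half-normal distance kernel."""
    return p0 * np.exp(-distance_km**2 / (2.0 * sigma_km**2))

for d in (0.0, 2.5, 5.0, 10.0):
    print(f"distance {d:4.1f} km -> detection probability {detection_prob(d):.3f}")
```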
Role of fat metabolism in exercise.
Askew, E W
1984-07-01
Fat and carbohydrate are the two major energy sources used during exercise. Either source can predominate, depending upon the duration and intensity of exercise, degree of prior physical conditioning, and the composition of the diet consumed in the days prior to a bout of exercise. Fatty acid oxidation can contribute 50 to 60 per cent of the energy expenditure during a bout of low intensity exercise of long duration. Strenuous submaximal exercise requiring 65 to 80 per cent of VO2 max will utilize less fat (10 to 45 per cent of the energy expended). Exercise training is accompanied by metabolic adaptations that occur in skeletal muscle and adipose tissue and that facilitate a greater delivery and oxidation of fatty acids during exercise. The trained state is characterized by an increased flux of fatty acids through smaller pools of adipose tissue energy. This is reflected by smaller, more metabolically active adipose cells in smaller adipose tissue depots. Peak blood concentrations of free fatty acids and ketone bodies are lower during and following exercise in trained individuals, probably due to increased capacity of the skeletal musculature to oxidize these energy sources. Trained individuals oxidize more fat and less carbohydrate than untrained subjects when performing submaximal work of the same absolute intensity. This increased capacity to utilize energy from fat conserves crucial muscle and liver glycogen stores and can contribute to increased endurance. Further benefits of the enhanced lipid metabolism accompanying chronic aerobic exercise training are decreased cardiac risk factors. Exercise training results in lower blood cholesterol and triglycerides and increased high density lipoprotein cholesterol. High-fat diets are not recommended because of their association with atherosclerotic heart disease. Recent evidence suggests that low-fat high-carbohydrate diets may increase blood triglycerides and reduce high density lipoproteins. This suggests that the chronic ingestion of diets that are extreme in their composition of either fat or carbohydrate should be approached with caution in health-conscious athletes, as well as in sedentary individuals.
Daniel Goodman’s empirical approach to Bayesian statistics
Gerrodette, Tim; Ward, Eric; Taylor, Rebecca L.; Schwarz, Lisa K.; Eguchi, Tomoharu; Wade, Paul; Himes Boor, Gina
2016-01-01
Bayesian statistics, in contrast to classical statistics, uses probability to represent uncertainty about the state of knowledge. Bayesian statistics has often been associated with the idea that knowledge is subjective and that a probability distribution represents a personal degree of belief. Dr. Daniel Goodman considered this viewpoint problematic for issues of public policy. He sought to ground his Bayesian approach in data, and advocated the construction of a prior as an empirical histogram of “similar” cases. In this way, the posterior distribution that results from a Bayesian analysis combined comparable previous data with case-specific current data, using Bayes’ formula. Goodman championed such a data-based approach, but he acknowledged that it was difficult in practice. If based on a true representation of our knowledge and uncertainty, Goodman argued that risk assessment and decision-making could be an exact science, despite the uncertainties. In his view, Bayesian statistics is a critical component of this science because a Bayesian analysis produces the probabilities of future outcomes. Indeed, Goodman maintained that the Bayesian machinery, following the rules of conditional probability, offered the best legitimate inference from available data. We give an example of an informative prior in a recent study of Steller sea lion spatial use patterns in Alaska.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At the local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At the regional scale, joint inversion of gravity and magnetic data is applied for the estimation of the lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At the planetary scale, the Earth's mantle temperature and element composition are inferred from seismic travel-time and geodetic data.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structural response have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
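For context, the classical spectral-representation simulation that the proposed method builds on can be sketched as below for a scalar process; the target spectrum and discretization are hypothetical, and the paper's random-function constraint that collapses the phases to two elementary variables is not reproduced.

```python
# Illustrative sketch: classical spectral-representation simulation of a scalar
# stationary Gaussian process from a one-sided target spectrum S(w); the
# dimension-reduction by random functions proposed in the paper is not shown.
import numpy as np

def simulate_process(S, w_max, n_freq, t, rng):
    """Sample one realization of a zero-mean stationary process from its PSD."""
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw            # midpoint frequencies
    amp = np.sqrt(2.0 * S(w) * dw)                # component amplitudes from the PSD
    phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)
    return np.sum(amp[:, None] * np.cos(np.outer(w, t) + phases[:, None]), axis=0)

S = lambda w: 1.0 / (1.0 + w**2)                  # hypothetical one-sided target PSD
t = np.linspace(0.0, 200.0, 4000)
x = simulate_process(S, w_max=20.0, n_freq=512, t=t, rng=np.random.default_rng(5))

w_grid = np.linspace(0.0, 20.0, 2000)
print(f"sample variance {x.var():.2f} vs target {np.trapz(S(w_grid), w_grid):.2f}")
```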
Zhernokletov, Dmitry M; Negara, Muhammad A; Long, Rathnait D; Aloni, Shaul; Nordlund, Dennis; McIntyre, Paul C
2015-06-17
We correlate interfacial defect state densities with the chemical composition of the Al2O3/GaN interface in metal-oxide-semiconductor (MOS) structures using synchrotron photoelectron emission spectroscopy (PES), cathodoluminescence and high-temperature capacitance-voltage measurements. The influence of wet chemical pretreatments involving (1) HCl+HF etching or (2) NH4OH(aq) exposure prior to atomic layer deposition (ALD) of Al2O3 was investigated on n-type GaN (0001) substrates. Prior to ALD, PES analysis of the NH4OH(aq) treated surface shows a greater Ga2O3 component compared to either HCl+HF treated or as-received surfaces. The lowest surface concentration of oxygen species is detected on the acid etched surface, whereas the NH4OH treated sample reveals the lowest carbon surface concentration. Both surface pretreatments improve electrical characteristics of MOS capacitors compared to untreated samples by reducing the Al2O3/GaN interface state density. The lowest interfacial trap density at energies in the upper band gap is detected for samples pretreated with NH4OH. These results are consistent with cathodoluminescence data indicating that the NH4OH treated samples show the strongest band edge emission compared to as-received and acid etched samples. PES results indicate that the combination of reduced carbon contamination and a maintained Ga2O3 interfacial layer, achieved by NH4OH(aq) exposure prior to ALD, results in fewer interface traps after Al2O3 deposition on the GaN substrate.
NASA Astrophysics Data System (ADS)
Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi
2014-07-01
{110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure has large variation among classes or subclasses, which is mainly related to the crystallographic direction of the constituting aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions for the twin planes. The former is probably characteristic in the CL structures of specific subclasses of Gastropoda. The latter type is probably related to the crystal boundaries dominated by (hk0) interfaces in the microstructures with preferred orientation of the c-axis, and the twin density is mainly correlated to the crystal size in the microstructures.
NASA Astrophysics Data System (ADS)
Isakson, Steve Wesley
2001-12-01
Well-known principles of physics explain why resolution restrictions occur in images produced by optical diffraction-limited systems. The limitations involved are present in all diffraction-limited imaging systems, including acoustical and microwave. In most circumstances, however, prior knowledge about the object and the imaging system can lead to resolution improvements. In this dissertation I outline a method to incorporate prior information into the process of reconstructing images to superresolve the object beyond the above limitations. This dissertation research develops the details of this methodology. The approach can provide the most-probable global solution employing a finite number of steps in both far-field and near-field images. In addition, in order to overcome the effects of noise present in any imaging system, this technique provides a weighted image that quantifies the likelihood of various imaging solutions. By utilizing Bayesian probability, the procedure is capable of incorporating prior information about both the object and the noise to overcome the resolution limitation present in many imaging systems. Finally I will present an imaging system capable of detecting the evanescent waves missing from far-field systems, thus improving the resolution further.
What you see is what you expect: rapid scene understanding benefits from prior experience.
Greene, Michelle R; Botros, Abraham P; Beck, Diane M; Fei-Fei, Li
2015-05-01
Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
NASA Astrophysics Data System (ADS)
Yang, P.; Ng, T. L.; Yang, W.
2015-12-01
Effective water resources management depends on reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be of the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett, are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distributions and parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events used to fit the joint probability functions. Preliminary results, assuming lognormal marginal distributions and the Clayton copula function, suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50) bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100) bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC, regardless of whether or not an informative prior exists.
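The study's fitting and uncertainty procedures are not reproduced here; the sketch below only illustrates, under assumed values, how a Clayton copula parameter can be estimated from paired drought variables (via inversion of Kendall's tau rather than a likelihood-based fit) and how a nonparametric bootstrap yields a confidence interval for it.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)

def simulate_clayton(theta, n):
    """Draw n (u, v) pairs from a Clayton copula via the conditional method."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u**(-theta) * (w**(-theta / (1.0 + theta)) - 1.0) + 1.0)**(-1.0 / theta)
    return u, v

def fit_clayton(u, v):
    """Moment-style estimate by inverting Kendall's tau: theta = 2*tau / (1 - tau)."""
    tau, _ = kendalltau(u, v)
    return 2.0 * tau / (1.0 - tau)

# "True" dependence parameter and a synthetic sample of (duration, severity) ranks.
theta_true, n = 2.0, 100
u, v = simulate_clayton(theta_true, n)

# Nonparametric bootstrap of the copula parameter.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(fit_clayton(u[idx], v[idx]))
ci = np.percentile(boot, [2.5, 97.5])
print(f"theta-hat = {fit_clayton(u, v):.2f}, 95% bootstrap CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```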
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Bülow, Robin; Völzke, Henry
2017-07-01
To develop the first fully automated 3D spleen segmentation framework for T1-weighted magnetic resonance (MR) imaging data and to verify its performance for spleen delineation and volumetry. This approach addresses the issue of low contrast between the spleen and adjacent tissue in non-contrast-enhanced MR images. Native T1-weighted MR volume data were acquired on a 1.5 T MR system in an epidemiological study. We analyzed random subsamples of MR examinations without pathologies to develop and verify the spleen segmentation framework. The framework is modularized to incorporate different kinds of prior knowledge into the segmentation pipeline. Classification by support vector machines differentiates between five different shape types in computed foreground probability maps and recognizes characteristic spleen regions in axial slices of MR volume data. A spleen-shape space generated by training produces subject-specific prior shape knowledge that is then incorporated into a final 3D level set segmentation method. Individually adapted shape-driven forces, as well as image-driven forces resulting from refined foreground probability maps, steer the level set successfully to segment the spleen. The framework achieves promising segmentation results, with mean Dice coefficients of nearly 0.91 and low volumetric mean errors of 6.3%. The presented spleen segmentation approach can delineate spleen tissue in native MR volume data. Several kinds of prior shape knowledge, including subject-specific 3D prior shape knowledge, can be used to guide segmentation processes and achieve promising results.
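The following toy sketch is not the paper's level-set formulation; it merely illustrates, on synthetic 2D data, how an image-driven force from a foreground probability map and a shape-driven force toward a prior shape can be combined in a level-set update (the weights, step size, and synthetic images are assumptions).

```python
import numpy as np

def evolve_level_set(prob_map, phi_prior, steps=200, dt=0.2,
                     w_image=1.0, w_shape=0.5):
    """Toy level-set evolution: phi < 0 marks the segmented interior.  The image
    force pushes phi downward (inward growth) where the foreground probability
    exceeds 0.5; the shape force relaxes phi toward a subject-specific prior."""
    phi = phi_prior.copy()
    for _ in range(steps):
        image_force = 0.5 - prob_map        # negative where foreground is likely
        shape_force = phi_prior - phi       # pulls phi back toward the prior shape
        phi = phi + dt * (w_image * image_force + w_shape * shape_force)
    return phi < 0.0                        # boolean segmentation mask

# Synthetic example: a bright disc in a noisy foreground probability map,
# with a circular prior shape (all values are illustrative).
yy, xx = np.mgrid[0:64, 0:64]
prob_map = ((xx - 32.0)**2 + (yy - 30.0)**2 < 14.0**2).astype(float)
prob_map = np.clip(prob_map + np.random.default_rng(2).normal(0.0, 0.2, prob_map.shape),
                   0.0, 1.0)
phi_prior = np.sqrt((xx - 32.0)**2 + (yy - 32.0)**2) - 12.0   # prior: circle, radius 12
mask = evolve_level_set(prob_map, phi_prior)
print("segmented pixels:", int(mask.sum()), "of", int(prob_map.size))
```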
NASA Technical Reports Server (NTRS)
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is through the use of stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms derived for the optimum receiver for signals corrupted by both additive and multiplicative noise.
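A minimal sketch of the general idea (the stratification scheme and the standard-normal example are illustrative, not the report's construction): a continuous density is replaced by a small set of point masses, which can then stand in for the density whenever expectations are evaluated inside a filter recursion.

```python
import numpy as np
from scipy.stats import norm

# Discrete mass approximation of a continuous density: split a standard normal
# variable into equal-probability strata and represent each stratum by its
# conditional mean, carrying the stratum's probability as a point mass.
n_strata = 8
edges = norm.ppf(np.linspace(0.0, 1.0, n_strata + 1))           # boundaries (+-inf at ends)
masses = np.full(n_strata, 1.0 / n_strata)                      # equal probability per stratum
points = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) / masses  # conditional means

# The discrete approximation replaces the continuous density when expectations
# are needed, e.g. in a filtering recursion:
print("E[X]   approx:", points @ masses)        # 0 by symmetry
print("E[X^2] approx:", (points**2) @ masses)   # slightly below 1 for a coarse grid
```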
Jarmolowicz, David P; Sofis, Michael J; Darden, Alexandria C
2016-07-01
Although progressive ratio (PR) schedules have been used to explore the effects of a range of reinforcer parameters (e.g., magnitude, delay), the effects of reinforcer probability remain underexplored. The present project used independently progressing concurrent PR PR schedules to examine the effects of reinforcer probability on PR breakpoint (the highest ratio completed prior to a session-terminating 300-s pause) and response allocation. The probability of reinforcement on one lever remained at 100% across all conditions, while the probability of reinforcement on the other lever was systematically manipulated (i.e., 100%, 50%, 25%, 12.5%, and a replication of 25%). Breakpoints on the manipulated lever systematically decreased with decreasing reinforcer probability, while breakpoints on the control lever remained unchanged. Patterns of switching between the two levers were well described by a choice-by-choice unit price model that accounted for the hyperbolic discounting of the value of probabilistic reinforcers. Copyright © 2016 Elsevier B.V. All rights reserved.
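A brief sketch of one common form of such a model (the parameter values are assumptions, not estimates from the study): the value of a probabilistic reinforcer is discounted hyperbolically by the odds against its delivery, and the discounted value then enters a unit-price calculation.

```python
# Illustrative sketch of hyperbolic odds discounting feeding into a unit-price comparison.

def discounted_value(amount, p, h=1.0):
    """Hyperbolic discounting by odds against: V = A / (1 + h * (1 - p) / p)."""
    odds_against = (1.0 - p) / p
    return amount / (1.0 + h * odds_against)

def unit_price(responses_required, amount, p, h=1.0):
    """Unit price = responses required per unit of (discounted) reinforcer value."""
    return responses_required / discounted_value(amount, p, h)

for p in (1.0, 0.5, 0.25, 0.125):
    print(f"p = {p:5.3f}  value = {discounted_value(1.0, p):.3f}  "
          f"unit price at FR 10 = {unit_price(10, 1.0, p):.1f}")
```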
Spacecraft Collision Avoidance
NASA Astrophysics Data System (ADS)
Bussy-Virat, Charles
The rapid increase of the number of objects in orbit around the Earth poses a serious threat to operational spacecraft and astronauts. In order to effectively avoid collisions, mission operators need to assess the risk of collision between the satellite and any other object whose orbit is likely to approach its trajectory. Several algorithms predict the probability of collision but have limitations that impair the accuracy of the prediction. An important limitation is that uncertainties in the atmospheric density are usually not taken into account in the propagation of the covariance matrix from current epoch to closest approach time. The Spacecraft Orbital Characterization Kit (SpOCK) was developed to accurately predict the positions and velocities of spacecraft. The central capability of SpOCK is a high accuracy numerical propagator of spacecraft orbits and computations of ancillary parameters. The numerical integration uses a comprehensive modeling of the dynamics of spacecraft in orbit that includes all the perturbing forces that a spacecraft is subject to in orbit. In particular, the atmospheric density is modeled by thermospheric models to allow for an accurate representation of the atmospheric drag. SpOCK predicts the probability of collision between two orbiting objects taking into account the uncertainties in the atmospheric density. Monte Carlo procedures are used to perturb the initial position and velocity of the primary and secondary spacecraft from their covariance matrices. Developed in C, SpOCK supports parallelism to quickly assess the risk of collision so it can be used operationally in real time. The upper atmosphere of the Earth is strongly driven by the solar activity. In particular, abrupt transitions from slow to fast solar wind cause important disturbances of the atmospheric density, hence of the drag acceleration that spacecraft are subject to. The Probability Distribution Function (PDF) model was developed to predict the solar wind speed five days in advance. In particular, the PDF model is able to predict rapid enhancements in the solar wind speed. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. Ensemble forecasts provide the forecasters with an estimation of the uncertainty in the prediction, which can be used to derive uncertainties in the atmospheric density and in the drag acceleration. The dissertation then demonstrates that uncertainties in the atmospheric density result in large uncertainties in the prediction of the probability of collision. As an example, the effects of a geomagnetic storm on the probability of collision are illustrated. The research aims at providing tools and analyses that help understand and predict the effects of uncertainties in the atmospheric density on the probability of collision. The ultimate motivation is to support mission operators in making the correct decision with regard to a potential collision avoidance maneuver by providing an uncertainty on the prediction of the probability of collision instead of a single value. This approach can help avoid performing unnecessary costly maneuvers, while making sure that the risk of collision is fully evaluated.
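The sketch below is not SpOCK code; it only illustrates, with invented numbers, the Monte Carlo step described above: perturbing the two objects' states with draws from their covariance matrices and estimating the collision probability as the fraction of samples whose miss distance falls below the combined hard-body radius.

```python
import numpy as np

def mc_collision_probability(r1, P1, r2, P2, hard_body_radius,
                             n_samples=100_000, seed=0):
    """Monte Carlo estimate of collision probability at closest approach:
    perturb the two objects' positions with samples drawn from their covariance
    matrices and count how often the miss distance falls below the combined
    hard-body radius.  Positions are 3-vectors (km), covariances 3x3 (km^2)."""
    rng = np.random.default_rng(seed)
    d1 = rng.multivariate_normal(r1, P1, n_samples)
    d2 = rng.multivariate_normal(r2, P2, n_samples)
    miss = np.linalg.norm(d1 - d2, axis=1)
    return np.mean(miss < hard_body_radius)

# Illustrative numbers only (km); drag-driven density uncertainty would typically
# inflate the along-track covariance terms before this step.
r1 = np.array([7000.0, 0.00, 0.0])
r2 = np.array([7000.0, 0.05, 0.0])
P1 = np.diag([1e-3, 4e-3, 1e-3])
P2 = np.diag([1e-3, 4e-3, 1e-3])
print("Pc ~", mc_collision_probability(r1, P1, r2, P2, hard_body_radius=0.01))
```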
On incomplete sampling under birth-death models and connections to the sampling-based coalescent.
Stadler, Tanja
2009-11-07
The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different; the widely used gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.
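One commonly quoted form of this equivalence (stated here from memory as an assumption, not quoted from the paper) maps a birth-death process with birth rate $\lambda$, death rate $\mu$, and sampling probability $\rho$ to a completely sampled process with transformed rates:

```latex
\lambda' = \lambda \rho, \qquad \mu' = \mu - \lambda\,(1 - \rho).
```

Because a one-parameter family of triples $(\lambda, \mu, \rho)$ collapses onto the same pair $(\lambda', \mu')$, the three parameters cannot be inferred jointly from the reconstructed tree, consistent with the identifiability statement above.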
Investigation of the hydrogen fluxes in the plasma edge of W7-AS during H-mode discharges
NASA Astrophysics Data System (ADS)
Langer, U.; Taglauer, E.; Fischer, R.; W7-AS Team
2001-03-01
In the stellarator W7-AS the H-mode is characterized by an edge transport barrier which is localized within a few centimeters inside the separatrix. The corresponding L-H transition shows well-known features such as the steepening of the temperature and density profiles in the region of the separatrix. With a so-called sniffer probe, the temporal development of the hydrogen and deuterium fluxes has been studied in the plasma edge during different H-mode discharges with deuterium gas puffing. Prior to the transition, a significant reduction of the deuterium and also the hydrogen fluxes can be observed. This confirms the assumption that the steepening of the density profiles starts at the outermost edge of the plasma. Sniffer probe measurements in the plasma edge can therefore identify a precursor of the L-H transition. The analysis of the hydrogen neutral gases shows a distinct change of the hydrogen isotope ratio during the transition. This observation is in agreement with the change in the particle fluxes onto the targets and can also be seen in the reduced Hα signals from the limiters. It is further demonstrated that significant improvement in the time resolution of the measured data can be obtained by deconvolution of the data with the apparatus function using Bayesian probability theory and the maximum entropy method with adaptive kernels.
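The adaptive-kernel maximum entropy deconvolution used in the paper is not reproduced here; the sketch below shows a simpler iterative (Richardson-Lucy-type) deconvolution of a synthetic flux signal by a known apparatus function, illustrating only how time resolution lost to the instrument response can be partially recovered (the decay time, signal shape, and iteration count are assumptions).

```python
import numpy as np

def richardson_lucy(measured, kernel, iterations=50):
    """Simple iterative deconvolution of a non-negative signal by a known
    apparatus (response) function; a stand-in for the Bayesian maximum-entropy
    scheme, illustrating only the basic idea of recovering lost time resolution."""
    estimate = np.full_like(measured, measured.mean())
    kernel_mirror = kernel[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, kernel_mirror, mode="same")
    return estimate

# Synthetic example: a sharp drop in flux (the precursor) smeared by an
# exponential apparatus function with a ~10-sample decay time.
t = np.arange(200)
true_flux = np.where(t < 100, 1.0, 0.4)
kernel = np.exp(-np.arange(40) / 10.0)
kernel /= kernel.sum()
measured = np.convolve(true_flux, kernel, mode="same")
recovered = richardson_lucy(measured, kernel)
print("index of steepest recovered drop:", int(np.argmin(np.diff(recovered))))
```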
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition obtained by using this figure of merit compared with using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and gave false indications of error on 0.2% of the acquisitions.
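A minimal Bayes' rule sketch (not the Mu-2 algorithm itself; the prior and likelihood values are placeholders) of how a prior error probability is updated into a posterior error probability once a ranging observable has been measured:

```python
def posterior_error_probability(prior_error, lik_given_error, lik_given_correct):
    """P(error | data) via Bayes' rule from the prior and the two likelihoods."""
    num = prior_error * lik_given_error
    den = num + (1.0 - prior_error) * lik_given_correct
    return num / den

# Example: a component with a 5% prior error probability, where the observed
# correlation amplitude is 3x more likely under the correct-acquisition hypothesis.
print(posterior_error_probability(prior_error=0.05,
                                  lik_given_error=1.0,
                                  lik_given_correct=3.0))
```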
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
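The MMCPHD recursion itself is not reproduced here; the toy sketch below only illustrates the multiple-model flavor of the prediction step for a Gaussian-mixture intensity with two motion models, where each component is split according to the model-transition matrix and propagated under each model's dynamics (all matrices and probabilities are illustrative).

```python
import numpy as np

dt = 1.0
F_cv = np.array([[1.0, dt], [0.0, 1.0]])   # model 1: nearly constant velocity
F_st = np.array([[1.0, 0.0], [0.0, 0.0]])  # model 2: stopped target (illustrative)
Q = 0.1 * np.eye(2)
models = [F_cv, F_st]
trans = np.array([[0.9, 0.1],              # p(model j at k | model i at k-1)
                  [0.2, 0.8]])
p_survival = 0.95

# Gaussian-mixture intensity components: (weight, model index, mean, covariance).
components = [(1.0, 0, np.array([0.0, 1.0]), np.eye(2))]

predicted = []
for w, i, m, P in components:
    for j, F in enumerate(models):
        predicted.append((p_survival * trans[i, j] * w,  # weight split across models
                          j, F @ m, F @ P @ F.T + Q))

for w, j, m, P in predicted:
    print(f"model {j}: weight {w:.3f}, predicted mean {m}")
```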