NASA Astrophysics Data System (ADS)
Mouchtouris, S.; Kokkoris, G.
2018-01-01
A generalized equation for the electron energy probability function (EEPF) of inductively coupled Ar plasmas is proposed under conditions of nonlocal electron kinetics and diffusive cooling. The proposed equation describes the local EEPF in a discharge, and the independent variable is the kinetic energy of the electrons. The EEPF consists of a bulk and a depleted tail part and incorporates the effects of the plasma potential, Vp, and the pressure. Due to diffusive cooling, the break point of the EEPF is at eVp. The pressure alters the shape of the bulk and the slope of the tail part. The parameters of the proposed EEPF are extracted by fitting to measured EEPFs (at one point in the reactor) at different pressures. By coupling the proposed EEPF with a hybrid plasma model, measurements in the gaseous electronics conference reference reactor concerning (a) the electron density and temperature and the plasma potential, either spatially resolved or at different pressures (10-50 mTorr) and powers, and (b) the ion current density at the electrode, are well reproduced. The effect of the choice of the EEPF on the results is investigated by comparison to an EEPF coming from the Boltzmann equation (local electron kinetics approach) and to a Maxwellian EEPF. The accuracy of the results, and the fact that the proposed EEPF is predefined, renders its use a reliable, low-computational-cost alternative to stochastic electron kinetic models at low pressure, one that can be extended to other gases and/or different electron heating mechanisms.
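The abstract does not reproduce the proposed equation, but the structure it describes, a bulk part joined to a depleted tail with a break point at eVp, can be sketched numerically. A minimal Python illustration, assuming a generalized-exponential bulk; the functional forms and all parameter names (a, T_bulk, x, T_tail) are hypothetical stand-ins, not the paper's actual parametrization.

```python
import numpy as np

def eepf(eps, a, T_bulk, x, V_p, T_tail):
    """Illustrative two-part EEPF: a generalized bulk ~ exp(-(eps/T_bulk)**x)
    joined continuously to a steeper (depleted) exponential tail above eV_p.
    Energies in eV, so the break point eV_p equals V_p numerically."""
    bulk = a * np.exp(-(eps / T_bulk) ** x)         # x=1 Maxwellian, x=2 Druyvesteyn
    g_break = a * np.exp(-(V_p / T_bulk) ** x)      # bulk value at the break point
    tail = g_break * np.exp(-(eps - V_p) / T_tail)  # depleted tail, T_tail < T_bulk
    return np.where(eps < V_p, bulk, tail)

eps = np.linspace(0.0, 30.0, 301)                   # electron kinetic energy (eV)
g = eepf(eps, a=1.0, T_bulk=3.0, x=1.3, V_p=15.0, T_tail=1.0)
```

In a fitting workflow such as the one described, (a, T_bulk, x, T_tail) would be extracted by least-squares fits to the measured EEPFs at each pressure.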
External control of electron energy distributions in a dual tandem inductively coupled plasma
NASA Astrophysics Data System (ADS)
Liu, Lei; Sridhar, Shyam; Zhu, Weiye; Donnelly, Vincent M.; Economou, Demetre J.; Logue, Michael D.; Kushner, Mark J.
2015-08-01
The control of electron energy probability functions (EEPFs) in low pressure partially ionized plasmas is typically accomplished through the format of the applied power. For example, through the use of pulse power, the EEPF can be modulated to produce shapes not possible under continuous wave excitation. This technique uses internal control. In this paper, we discuss a method for external control of EEPFs by transport of electrons between separately powered inductively coupled plasmas (ICPs). The reactor incorporates dual ICP sources (main and auxiliary) in a tandem geometry whose plasma volumes are separated by a grid. The auxiliary ICP is continuously powered while the main ICP is pulsed. Langmuir probe measurements of the EEPFs during the afterglow of the main ICP suggest that transport of hot electrons from the auxiliary plasma provided what is effectively an external source of energetic electrons. The tail of the EEPF and the bulk electron temperature were then elevated in the afterglow of the main ICP by this external source of power. Results from a computer simulation for the evolution of the EEPFs concur with measured trends.
NASA Astrophysics Data System (ADS)
Lee, Jung Yeol; Verboncoeur, John P.; Lee, Hae June
2018-04-01
The transition of electron energy probability functions (EEPFs) through a change of heating mode is an important issue in plasma science. A well-known example is the change of the EEPF from bi-Maxwellian to Maxwellian or Druyvesteyn with increasing gas pressure, which has been analyzed in terms of the ratio of the energy relaxation mean free path to the electrode gap distance. In this study, a new aspect, the temporal decay of kinetic energy during the energy relaxation time, is theoretically analyzed and compared with a particle-in-cell Monte Carlo collision simulation of capacitively coupled plasmas. A fully kinetic description of electron transport and collisions shows drastic changes of the EEPF with the variation of the driving frequency due to the heating mode transition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Seuli; Kang, Hyun-Ju; Kim, Yu-Sin
2016-06-15
The electron bounce resonance was experimentally investigated in a low pressure planar inductively coupled plasma. The electron energy probability functions (EEPFs) were measured at different chamber heights, and the energy diffusion coefficients were calculated with a kinetic model. It is found that the EEPFs begin to flatten at the first electron bounce resonance condition, and the plateau shifts to a higher electron energy as the chamber height increases. The plateau, which indicates strong electron heating, corresponds not only to the electron bounce resonance condition but also to the peaks of the first component of the energy diffusion coefficients. As a result, the plateau formation in the EEPFs is mainly due to the electron bounce resonance in a finite inductive discharge.
Non-Maxwellian electron energy probability functions in the plume of a SPT-100 Hall thruster
NASA Astrophysics Data System (ADS)
Giono, G.; Gudmundsson, J. T.; Ivchenko, N.; Mazouffre, S.; Dannenmayer, K.; Loubère, D.; Popelier, L.; Merino, M.; Olentšenko, G.
2018-01-01
We present measurements of the electron density, the effective electron temperature, the plasma potential, and the electron energy probability function (EEPF) in the plume of a 1.5 kW-class SPT-100 Hall thruster, derived from cylindrical Langmuir probe measurements. The measurements were taken on the plume axis at distances between 550 and 1550 mm from the thruster exit plane, and at different angles from the plume axis at 550 mm for three operating points of the thruster, characterized by different discharge voltages and mass flow rates. The bulk of the electron population can be approximated as a Maxwellian distribution, but the measured distributions were seen to decline faster at higher energy. The measured EEPFs were best modelled with a general EEPF with an exponent α between 1.2 and 1.5, and their axial and angular characteristics were studied for the different operating points of the thruster. As a result, the exponent α from the fitted distribution was seen to be almost constant as a function of the axial distance along the plume, as well as across the angles. However, the exponent α was seen to be affected by the mass flow rate, suggesting a possible relationship with the collision rate, especially close to the thruster exit. The ratio of the specific heats, the γ factor, between the measured plasma parameters was found to be lower than the adiabatic value of 5/3 for each of the thruster settings, indicating the existence of non-trivial kinetic heat fluxes in the near collisionless plume. These results are intended to be used as input and/or testing properties for plume expansion models in further work.
The frequency dependence of the discharge properties in a capacitively coupled oxygen discharge
NASA Astrophysics Data System (ADS)
Gudmundsson, J. T.; Snorrason, D. I.; Hannesdottir, H.
2018-02-01
We use the one-dimensional object-oriented particle-in-cell Monte Carlo collision code oopd1 to explore the evolution of the charged particle density profiles, the electron heating mechanism, the electron energy probability function (EEPF), and the ion energy distribution in a single frequency capacitively coupled oxygen discharge, with driving frequency in the range 12-100 MHz. At low driving frequency and low pressure (5 and 10 mTorr), a combination of stochastic (α-mode) and drift ambipolar (DA) heating in the bulk plasma (the electronegative core) is observed, and the DA-mode dominates the time averaged electron heating. As the driving frequency or pressure is increased, the heating mode transitions into a pure α-mode, where electron heating in the sheath region dominates. At low pressure (5 and 10 mTorr), this transition coincides with a sharp decrease in electronegativity. At low pressure and low driving frequency, the EEPF is concave. As the driving frequency is increased, the number of low energy electrons increases and the relative number of higher energy electrons (>10 eV) increases. At high driving frequency, the EEPF develops a convex shape or becomes bi-Maxwellian.
NASA Astrophysics Data System (ADS)
Pandey, Shail; Nath Patel, Dudh; Ram Baitha, Anuj; Bhattacharjee, Sudeep
2015-12-01
The electron energies and their distribution function are measured in non-equilibrium transient pulsed microwave plasmas in the interpulse regime using a retarding field electron energy analyzer. The plasmas are driven to different initial conditions by varying the electromagnetic (EM) wave pulse duration, peak power, or wave frequency. Two cases of wave excitation are investigated: (i) short-pulse (pulse duration, t w ~ 1 μs), high-power (~60 kW) waves at 9.45 GHz and (ii) medium-pulse (t w ~ 20 μs), moderate-power (~3 kW) waves at 2.45 GHz. It is found that high-power, short-duration pulses lead to a significantly different electron energy probability function (EEPF) in the interpulse phase (a Maxwellian with a bump on the tail), although the average energy per pulse (~60 mJ) is maintained the same in the two modes of wave excitation. Electrons with energies >250 eV are found to exist in the discharge in both cases. Another subset of experiments is performed to delineate the effect of the wave frequency and the peak power on the EEPF. A traveling wave tube (TWT) amplifier based microwave source for generating pulsed plasma (t w = 230 μs) over a wide frequency range (6-18 GHz) is employed for this purpose. Further measurements of metastable density using optical emission spectroscopy and an ion energy analyzer have also been carried out. By tailoring the EEPF of the transient plasma and the metastable densities, new applications in plasma processing, chemistry, and biology can be realized in the interpulse phase of the discharge.
Development of Simple Designs of Multitip Probe Diagnostic Systems for RF Plasma Characterization
Naz, M. Y.; Shukrullah, S.; Ghaffar, A.; Rehman, N. U.
2014-01-01
Multitip probes are very useful diagnostics for analyzing and controlling the physical phenomena occurring in low temperature discharge plasmas. However, DC biased probes often fail to perform well in processing plasmas. The objective of this work was to deduce simple designs of DC biased multitip probes for parametric study of radio frequency plasmas. For this purpose, symmetric double probe, asymmetric double probe, and symmetric triple probe diagnostic systems and their driving circuits were designed and tested in an inductively coupled plasma (ICP) generated by a 13.56 MHz radio frequency (RF) source. Using the I-V characteristics of these probes, the electron temperature, electron number density, and ion saturation current were measured as functions of input power and filling gas pressure. An increasing trend was noticed in electron temperature and electron number density with increasing input RF power, whilst a decreasing trend was evident in these parameters when measured against filling gas pressure. In addition, the electron energy probability function (EEPF) was also studied by using an asymmetric double probe. These studies confirmed the non-Maxwellian nature of the EEPF and the presence of two groups of energetic electrons at low filling gas pressures. PMID:24683326
Bulk plasma fragmentation in a C4F8 inductively coupled plasma: A hybrid modeling study
NASA Astrophysics Data System (ADS)
Zhao, Shu-Xia; Zhang, Yu-Ru; Gao, Fei; Wang, You-Nian; Bogaerts, Annemie
2015-06-01
A hybrid model is used to investigate the fragmentation of C4F8 inductive discharges. Indeed, the resulting reactive species are crucial for the optimization of the Si-based etching process, since they determine the mechanisms of fluorination, polymerization, and sputtering. In this paper, we present the dissociation degree, the density ratio of F vs. CxFy (i.e., fluorocarbon (fc) neutrals), the neutral vs. positive ion density ratio, details on the neutral and ion components, and fractions of various fc neutrals (or ions) in the total fc neutral (or ion) density in a C4F8 inductively coupled plasma source, as well as the effect of pressure and power on these results. To analyze the fragmentation behavior, the electron density and temperature and electron energy probability function (EEPF) are investigated. Moreover, the main electron-impact generation sources for all considered neutrals and ions are determined from the complicated C4F8 reaction set used in the model. The C4F8 plasma fragmentation is explained, taking into account many factors, such as the EEPF characteristics, the dominance of primary and secondary processes, and the thresholds of dissociation and ionization. The simulation results are compared with experiments from literature, and reasonable agreement is obtained. Some discrepancies are observed, which can probably be attributed to the simplified polymer surface kinetics assumed in the model.
NASA Astrophysics Data System (ADS)
Bilik, Narula
This dissertation research focuses on the experimental characterization of dust-plasma interactions at both low and atmospheric pressure. Its goal is to fill the knowledge gaps in (1) the fundamental research of low pressure dusty plasma electrons, which mainly relied on models with few experimental results; and (2) the nanoparticle synthesis process in atmospheric pressure uniform glow plasmas (APGDs), which is largely unexplored in spite of the economic advantage of APGDs in nanotechnology. The low pressure part of the dissertation research involves the development of a complete diagnostic process for an argon-silane capacitively-coupled RF plasma. The central part of the diagnostic process is the Langmuir probe measurement of the electron energy probability function (EEPF) in a dusty plasma, which has never been measured before. This is because the dust particles in the plasma cause severe probe surface contamination and consequently distort the measurement. This problem is solved by adding a solenoid-actuated shield structure to the Langmuir probe, which physically protects the Langmuir probe from dust particle deposition to ensure reliable EEPF measurements. The dusty plasma EEPFs are characterized by lower electron density and higher electron temperature accompanied by a drop in the low energy electron population. The Langmuir probe measurement is complemented with other characterizations including the capacitive probe measurement, power measurement, and dust particle collection. The complete diagnostic process then gives a set of local plasma parameters as well as the details of the dust-electron interactions reflected in the EEPFs. This set of data serves as input for an analytical model of nanoparticle charging to yield the time evolution of nanoparticle size and charge in the dusty plasma. The atmospheric pressure part of the dissertation focuses on the design and development of an APGD for zinc oxide nanocrystal synthesis. One of the main difficulties in maintaining an APGD is ensuring its uniformity over a large discharge volume. By examining past atmospheric pressure plasma reactor designs and looking into the details of the atmospheric pressure gas breakdown mechanism, three design features are proposed to ensure the APGD uniformity. These include the use of a dielectric barrier and an RF driving frequency, as well as a pre-ionization technique achieved by having a non-uniform gap spacing in a capacitively-coupled concentric cylinder reactor. The resulting APGD reactor operates stably in the abnormal glow regime using either helium or argon as the carrier gas. Diethylzinc (DEZ) and oxygen precursors are injected into the APGD to form zinc oxide nanocrystals. The physical and optical properties of these nanocrystals are characterized, and the system parameters that impact the nanoparticle size and deposition rate are identified.
Diagnosing the Fine Structure of Electron Energy Within the ECRIT Ion Source
NASA Astrophysics Data System (ADS)
Jin, Yizhou; Yang, Juan; Tang, Mingjie; Luo, Litao; Feng, Bingbing
2016-07-01
The ion source of the electron cyclotron resonance ion thruster (ECRIT) extracts ions from its ECR plasma to generate thrust, and has the property of low gas consumption (2 sccm, standard-state cubic centimeter per minute) and high durability. Due to the indispensable effects of primary electrons in the gas discharge, it is important to experimentally clarify the electron energy structure within the ion source of the ECRIT by analyzing the electron energy distribution function (EEDF) of the plasma inside the thruster. In this article the Langmuir probe method was used to diagnose the EEDF, from which the effective electron temperature, plasma density, and electron energy probability function (EEPF) were deduced. The experimental results show that the magnetic field influences the curves of the EEDF and EEPF and makes the effective plasma parameters nonuniform. The diagnosed electron temperature and density at the sample points increased from 4 eV / 2×10^16 m^-3 to 10 eV / 4×10^16 m^-3 with increasing distance from both the axis and the screen grid of the ion source. Electron temperature and density peaking near the wall coincided with the discharge process. However, a double Maxwellian electron distribution was unexpectedly observed near the axis of the ion source, about 30 mm from the screen grid. Moreover, the double Maxwellian distribution was more likely to emerge at high power and a low gas flow rate. These phenomena are believed to relate to the arrangement of the gas inlets and to the magnetic field where the double Maxwellian electron distribution exists. The results of this research may enhance the understanding of the plasma generation process in ion sources of this type and help to improve their performance. Supported by National Natural Science Foundation of China (No. 11475137)
Experimental evaluation of Langmuir probe sheath potential coefficient on the HL-2A tokamak
NASA Astrophysics Data System (ADS)
Nie, L.; Xu, M.; Ke, R.; Yuan, B. D.; Wu, Y. F.; Cheng, J.; Lan, T.; Yu, Y.; Hong, R. J.; Guo, D.; Ting, L.; Dong, Y. B.; Zhang, Y. P.; Song, X. M.; Zhong, W. L.; Wang, Z. H.; Sun, A. P.; Xu, J. Q.; Chen, W.; Yan, L. W.; Zou, X. L.; Duan, X. R.; HL-2A Team
2018-03-01
A systematic calibration experiment for the Langmuir probe sheath potential coefficient Λ, a critical coefficient for estimating the plasma sheath potential, has been carried out in HL-2A tokamak deuterium plasmas. The electron energy probability function (EEPF) shows that electrons outside the last closed flux surface (LCFS) follow a Maxwellian distribution, whereas inside the LCFS the distribution becomes bi-Maxwellian. Two methods of measuring the plasma potential and three methods of estimating Λ were compared. It is found that the estimated Λ coefficient is in the range of 2-3 outside the LCFS and increases to ~5 inside the LCFS due to the high temperature electron effect. Fortunately, the results show that the commonly used value Λ = 2.8 is still adequate for calculating the plasma potential when the overestimated electron temperature measured by a three-tip probe is used in the bi-Maxwellian case. Further analysis indicated that this value should be corrected; otherwise, errors may arise in the calculated radial electric field E_r and its shear dE_r/dr. The corrected value increased monotonically from ~2.2 to ~2.9 as the Langmuir probe moved from 40 mm outside the LCFS to 20 mm inside the LCFS.
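For context, Λ enters the standard Langmuir-probe relation between the plasma potential and the floating potential, V_p ≈ V_f + Λ·T_e (with T_e expressed in volts); this is the textbook relation, assumed here rather than quoted from the paper. A minimal sketch of how the choice of Λ propagates into the estimated potential:

```python
def plasma_potential(V_f, T_e, Lambda=2.8):
    """Estimate the plasma potential (V) from the floating potential V_f (V)
    and the electron temperature T_e (eV, i.e. volts): V_p = V_f + Lambda*T_e."""
    return V_f + Lambda * T_e

# An error dLambda in the coefficient shifts V_p by dLambda * T_e, which then
# propagates directly into the radial electric field E_r and its shear.
print(plasma_potential(V_f=-5.0, T_e=20.0))               # 51.0 V with Lambda = 2.8
print(plasma_potential(V_f=-5.0, T_e=20.0, Lambda=2.2))   # 39.0 V
```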
Hybrid Modeling of SiH4/Ar Discharge in a Pulse Modulated RF Capacitively Coupled Plasma
NASA Astrophysics Data System (ADS)
Xi-Feng, Wang; Yuan-Hong, Song; You-Nian, Wang; PSEG Team
2015-09-01
Pulsed plasmas offer important advantages for future micro-devices, especially for electronegative gas plasmas. In this work, a one-dimensional fluid and Monte Carlo (MC) hybrid model is developed to simulate a SiH4/Ar discharge in a pulse modulated radio-frequency (RF) capacitively coupled plasma (CCP). The time evolution of the densities of different species, such as electrons, ions, and radicals, is calculated, as well as the electron energy probability function (EEPF), which is obtained by an MC simulation. By pulsing the RF source, the electron energy distributions and plasma properties can be modulated through the pulse frequency and duty cycle. High electron energy tails are obtained during the power-on period, with the SiHx densities increasing rapidly, mainly by SiH4 dissociation. When the RF power is off, the densities in the bulk region decrease rapidly owing to the disappearance of high energy electrons, but increase near the electrodes, since diffusion proceeds without the confinement of the strong sheath electric field, which prolongs radical deposition on the plates. In particular, in the afterglow, the increase of negative ions near the electrodes results from attachment of cold electrons, which is beneficial for film deposition. This work was supported by the National Natural Science Foundation of China (Grant No. 11275038).
NASA Astrophysics Data System (ADS)
Akatsuka, Hiroshi; Tanaka, Yoshinori
2016-09-01
We reconsider the electron temperature of non-equilibrium plasmas on the basis of thermodynamics and statistical physics. Following our previous study on the oxygen plasma at GEC 2015, we discuss the same issue for the nitrogen plasma. First, we solve the Boltzmann equation to obtain the electron energy distribution function (EEDF) F(ε) of the nitrogen plasma as a function of the reduced electric field E/N. We also simultaneously solve the chemical kinetic equations of the essential excited species of nitrogen molecules and atoms, including the vibrational distribution function (VDF). Next, we calculate the electron mean energy U = ⟨ε⟩ = ∫_0^∞ ε F(ε) dε and the entropy S = -k ∫_0^∞ F(ε) ln[F(ε)] dε for each value of E/N. Then, we can obtain the electron temperature as T_e^stat = [∂S/∂U]^(-1). After that, we discuss the difference between T_e^stat and the kinetic temperature T_e^kin ≡ (2/3)⟨ε⟩, as well as the temperature given by the slope of the calculated EEDF for each value of E/N. We found that T_e^stat is close to the temperature given by the slope of the EEPF around ε = 4 eV.
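A numerical sketch of the statistical-temperature recipe described above: tabulate U and S over a family of EEDFs and differentiate. The generalized EEDF below is a stand-in for the Boltzmann-solver output (the sweep parameter is a proxy for E/N), and all parameter values are illustrative assumptions.

```python
import numpy as np

eps = np.linspace(1e-3, 50.0, 2000)   # energy grid (eV); work in units with k = 1
k = 1.0

def eedf(eps, T, x):
    """Normalized generalized EEDF F(eps) ~ sqrt(eps) * exp(-(eps/T)**x);
    x = 1 is Maxwellian, x = 2 Druyvesteyn."""
    F = np.sqrt(eps) * np.exp(-(eps / T) ** x)
    return F / np.trapz(F, eps)

Ts = np.linspace(1.0, 6.0, 40)        # sweep a control parameter (proxy for E/N)
U = np.array([np.trapz(eps * eedf(eps, T, 1.5), eps) for T in Ts])
S = np.array([-k * np.trapz(eedf(eps, T, 1.5) * np.log(eedf(eps, T, 1.5)), eps)
              for T in Ts])

T_stat = np.gradient(U, S)            # T_stat = (dS/dU)^(-1) = dU/dS
T_kin = (2.0 / 3.0) * U               # kinetic temperature for comparison
```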
Psychophysics of the probability weighting function
NASA Astrophysics Data System (ADS)
Takahashi, Taiki
2011-03-01
A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics widely utilized probability weighting functions, the psychophysical foundations of the probability weighting functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(-(-ln p)^α) (0 < α < 1), with w(0) = 0 and w(1) = 1.
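Prelec's one-parameter function is straightforward to evaluate; a small sketch (α = 0.65 is an arbitrary illustrative choice) showing the characteristic overweighting of small and underweighting of large probabilities:

```python
import numpy as np

def prelec_w(p, alpha=0.65):
    """Prelec (1998) weighting: w(p) = exp(-(-ln p)**alpha), 0 < alpha < 1.
    Gives the inverted-S shape with w(0) = 0, w(1) = 1, and w(1/e) = 1/e."""
    p = np.asarray(p, dtype=float)
    safe = np.maximum(p, 1e-300)                     # guard against log(0)
    return np.where(p > 0.0, np.exp(-(-np.log(safe)) ** alpha), 0.0)

p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(prelec_w(p))   # small p are overweighted (w > p), large p underweighted
```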
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To illustrate the fitness of each model, a psychological experiment was conducted to assess the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
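A sketch of the expected-value weighting function stated in the abstract; interpreting "log" as the natural logarithm, and the value k = 1, are assumptions:

```python
import numpy as np

def hyperbolic_log_w(p, k=1.0):
    """w(p) = (1 - k*ln p)**(-1), the paper's expected-value model for
    hyperbolic discounting of the geometric waiting time (mean 1/p)."""
    return 1.0 / (1.0 - k * np.log(p))

for p in (0.05, 0.25, 0.5, 0.75, 1.0):
    print(p, hyperbolic_log_w(p))   # w(1) = 1; small p are overweighted
```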
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, zeta sub b(t), is first related to the original wave elevation zeta(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for zeta(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of zeta sub b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
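The closing idea, flagging a draw whose probability density is improbably low under the specified distribution, can be sketched as a Monte Carlo p-value. This is a minimal illustration of the principle, not Tygert's exact statistic:

```python
import numpy as np
from scipy import stats

def density_pvalue(x_obs, pdf, sampler, n_mc=100_000, seed=0):
    """P( pdf(X) <= pdf(x_obs) ) for X drawn from the specified density:
    the probability of a draw at least as 'unlikely' as the observed one."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n_mc)
    return np.mean(pdf(x) <= pdf(x_obs))

# Example with N(0, 1) as the specified density: a draw far in the tail gets
# a small p-value, i.e. strong evidence against the specified density.
print(density_pvalue(3.5, stats.norm.pdf, lambda rng, n: rng.standard_normal(n)))
```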
Computation of the Complex Probability Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trainer, Amelia Jo; Ledwith, Patrick John
The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function
ERIC Educational Resources Information Center
Fennell, John; Baddeley, Roland
2012-01-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Economic choices reveal probability distortion in macaque monkeys.
Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram
2015-02-18
Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.
Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; He, Fei; Ma, Chris Y. T.
In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for to ensure the infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
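The quoted correspondence, an exponential force-magnitude distribution mapping to a Bessel-K0 Cartesian component density in 2-D, can be checked numerically. A sketch with the force scale f0 as an arbitrary unit:

```python
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(0)
f0 = 1.0
F = rng.exponential(f0, 1_000_000)             # magnitudes, P(F) ~ exp(-F/f0)
theta = rng.uniform(0.0, 2.0 * np.pi, F.size)  # isotropic directions in 2-D
fx = F * np.cos(theta)                         # Cartesian component

hist, edges = np.histogram(fx, bins=200, range=(-6, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = k0(np.abs(centers) / f0) / (np.pi * f0)  # K0 Cartesian density
print(np.max(np.abs(hist - analytic)))         # small, except in the bin at the
                                               # integrable fx = 0 singularity
```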
Modeling the effect of reward amount on probability discounting.
Myerson, Joel; Green, Leonard; Morris, Joshua
2011-03-01
The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
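A sketch of a hyperboloid probability-discounting function with an amount-dependent exponent, as described above: V = A/(1 + h·θ)^s(A), with odds against θ = (1 - p)/p and s(A) = c·A^b. The parameter names and values are illustrative, not the paper's fitted estimates:

```python
def discounted_value(amount, p, h=1.0, c=0.5, b=0.1):
    """Hyperboloid discounting of a probabilistic reward: the exponent
    s = c * amount**b grows with amount, so larger rewards are discounted
    more steeply (relative to their size)."""
    theta = (1.0 - p) / p            # odds against receiving the reward
    s = c * amount ** b
    return amount / (1.0 + h * theta) ** s

for A in (20, 10_000_000):
    print(A, discounted_value(A, p=0.5) / A)   # relative value falls with A
```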
Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.
Saveljev, Vladimir; Kim, Sung-Kyu
2015-10-05
The probability of the moiré effect in LCD displays is estimated as a function of angle based on experimental data; a theoretical function (node spacing) based on the distance between nodes is proposed. Both functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used in the minimization of the moiré effect in visual displays, especially in autostereoscopic 3D displays.
Decision making generalized by a cumulative probability weighting function
NASA Astrophysics Data System (ADS)
dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto
2018-01-01
Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk seeking, and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed by decision weights by means of a (cumulative) probability weighting function, w(p). We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already consecrated in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and it is supported by phenomenological models.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
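A minimal sketch of the quantification step, assuming the risk reduction is the product of the two probabilities (the patent's exact combination rule may differ):

```python
def risk_reduction(p_reach, p_mitigate):
    """Baseline risk is the probability the failure mode reaches its limit;
    if mitigation succeeds with probability p_mitigate, the residual risk is
    p_reach * (1 - p_mitigate), so the reduction is p_reach * p_mitigate."""
    residual = p_reach * (1.0 - p_mitigate)
    return p_reach - residual

print(risk_reduction(p_reach=0.02, p_mitigate=0.9))   # 0.018
```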
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Jackson, Andrew
Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function in relation to MRI-derived portal venous perfusion values. The global function was fitted to a logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
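The two quoted operating points (probability 0.5 at 68.6 and 0.9 at 98 mL/(100 g·min)) are enough to calibrate a logistic curve. The logistic form itself is an assumption, since the abstract only states "a sigmoid", so the third quoted point (0.03 at 38) is not guaranteed to be reproduced exactly:

```python
import numpy as np

f50 = 68.6                                  # perfusion at probability 0.5
width = (98.0 - f50) / np.log(9.0)          # solves p(98) = 0.9  ->  ~13.4

def local_function_probability(perfusion):
    """Logistic local-function probability vs. portal venous perfusion."""
    return 1.0 / (1.0 + np.exp(-(perfusion - f50) / width))

def global_function(perfusions):
    """Parallel architecture: global function as the sum of subunit
    local-function probabilities."""
    return float(np.sum(local_function_probability(np.asarray(perfusions))))

print(local_function_probability(np.array([38.0, 68.6, 98.0])))
```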
Organic Over-the-Horizon Targeting for the 2025 Surface Fleet
2015-06-01
Only fragments of this report's text are preserved in the record: an acronym list including Phit (probability of hit), Pk (probability of kill), PLAN (People's Liberation Army Navy), and PMEL (Pacific Marine Environmental Laboratory), and a reference to a top-level functional flow block diagram describing the high-level functions of the project's systems of systems.
Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.
Fennell, John; Baddeley, Roland
2012-10-01
Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects; overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
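The core mechanism, shrinking a stated probability toward a prior, can be sketched in a few lines. Treating p as if it were a frequency from n hypothetical observations combined with a Beta(a, b) prior is the simplest version of the idea, not the paper's full model-comparison machinery:

```python
def bayes_weight(p, n=10, a=1.0, b=1.0):
    """Posterior-mean 'weighted' probability after shrinking the stated p
    toward a Beta(a, b) prior: w(p) = (a + n*p) / (a + b + n).
    n, a, b are illustrative choices."""
    return (a + n * p) / (a + b + n)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(p, round(bayes_weight(p), 3))   # small p pulled up, large p pulled down
```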
1978-03-01
Only fragments of this AFIT thesis, "Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials" (AFIT/GAE), are preserved in the record, including part of a derivation of the risk of rupture for a unidirectionally laminated composite subjected to pure bending.
Decomposition of conditional probability for high-order symbolic Markov chains
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
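A sketch of the "successive iterations" construction for the simplest member of this family: a binary alphabet with only the pairwise (first-order) memory-function terms retained. The memory-function values are illustrative:

```python
import numpy as np

def generate_additive_chain(p0, F, length, seed=0):
    """Generate a binary sequence from the truncated decomposition
    P(s_t = 1 | history) = p0 + sum_r F[r] * (s_{t-r} - p0),
    i.e. keeping only the pairwise memory-function monomials."""
    rng = np.random.default_rng(seed)
    N = len(F)
    s = list((rng.random(N) < p0).astype(int))   # seed with i.i.d. symbols
    for _ in range(length - N):
        recent = np.array(s[-N:][::-1])          # s_{t-1}, s_{t-2}, ...
        p = p0 + np.dot(F, recent - p0)
        s.append(int(rng.random() < np.clip(p, 0.0, 1.0)))
    return np.array(s)

seq = generate_additive_chain(p0=0.5, F=np.array([0.2, 0.1, 0.05]), length=10_000)
print(seq.mean())   # stays near p0; correlations enter through F
```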
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory.
1980-11-01
Quantile functions are advocated by Parzen (1979) as providing an approach to probability-based data analysis. Reference: Gnanadesikan, R. (1968). Probability Plotting Methods for the Analysis of Data, Biometrika, 55, 1-17.
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^k(1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed is considered, and it is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
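The linearity result in the two abstracts above has a compact conjugacy reading. As a hedged sketch (standard Bayesian algebra, not the authors' derivation): if the rate $\lambda$ has a beta prior and $k$ jumps are observed in $n$ binomial trials, the posterior is again beta, and the MMSE estimate (the posterior mean) is affine in the count:

$$\lambda \sim \mathrm{Beta}(\alpha,\beta),\quad k \mid \lambda \sim \mathrm{Bin}(n,\lambda) \;\Longrightarrow\; \lambda \mid k \sim \mathrm{Beta}(\alpha+k,\,\beta+n-k),\quad \hat\lambda_{\mathrm{MMSE}} = \mathbb{E}[\lambda \mid k] = \frac{\alpha+k}{\alpha+\beta+n},$$

which is linear in the observations, consistent with the conclusion of both records.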
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
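Under the common simplifying assumption of independent symbol errors at the Reed-Solomon decoder input, the RS word error probability reduces to a binomial tail. The sketch below is that textbook bound, not the paper's refined functional model; the (255, 223) parameters are the familiar deep-space code, and the symbol error rate is an illustrative assumption.

```python
from math import comb

def rs_word_error_prob(n, k, p_s):
    """Word error probability of an RS(n, k) code under bounded-distance
    decoding, assuming independent symbol errors with probability p_s.
    The decoder corrects up to t = (n - k) // 2 symbol errors."""
    t = (n - k) // 2
    return sum(comb(n, i) * p_s**i * (1 - p_s)**(n - i)
               for i in range(t + 1, n + 1))

# Example: the RS(255, 223) code used in concatenated deep-space systems.
print(rs_word_error_prob(255, 223, 1e-2))
```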
On defense strategies for system of systems using aggregated correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.
2017-04-01
We consider a System of Systems (SoS) wherein each system Si, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of SoS given the failure of an individual system. We formulate the problem of ensuring the survival of SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed-mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground and background probability distribution functions. We use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
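The separation step can be sketched with a two-component Gaussian mixture fit. The code below deliberately substitutes the EM algorithm (via scikit-learn) for the Pearson's Method of Moments used in the paper, for brevity; the intensity values and component roles are synthetic assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic pixel-intensity history: a bright, stable background plus
# darker, more variable foreground traffic.
background = rng.normal(160.0, 8.0, size=8000)
foreground = rng.normal(90.0, 20.0, size=2000)
intensity = np.concatenate([background, foreground]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensity)
# The heavier, tighter component is taken as the background estimate,
# to be "remembered" and compared against later estimates.
print(gmm.means_.ravel(), gmm.weights_)
```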
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames, and the two show good agreement.
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of a nonstationary random process possessing an evolutionary power spectral density. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. We prove that this function can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
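The finite-sum and related series representations are easy to check numerically. The sketch below uses the standard Poisson-mixture form of the non-central χ² pdf (a well-known identity, not the paper's finite-sum formula) and compares it against scipy's reference implementation:

```python
import numpy as np
from scipy import stats

def ncx2_pdf_series(x, df, nc, terms=200):
    """Non-central chi-squared pdf as a Poisson(nc/2)-weighted mixture of
    central chi-squared pdfs with df + 2j degrees of freedom."""
    j = np.arange(terms)
    w = stats.poisson.pmf(j, nc / 2.0)
    return np.sum(w[:, None] * stats.chi2.pdf(x, df + 2 * j[:, None]), axis=0)

x = np.linspace(0.1, 30.0, 5)
print(ncx2_pdf_series(x, df=4, nc=3.0))
print(stats.ncx2.pdf(x, 4, 3.0))  # agrees to numerical precision
```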
Sloma, Michael F.; Mathews, David H.
2016-01-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. PMID:27852924
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices, and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent, in contrast to many of the probability density functions on which the received theory of finance is based.
Delay, Probability, and Social Discounting in a Public Goods Game
ERIC Educational Resources Information Center
Jones, Bryan A.; Rachlin, Howard
2009-01-01
A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual…
Electron number probability distributions for correlated wave functions.
Francisco, E; Martín Pendás, A; Blanco, M A
2007-03-07
Efficient formulas for computing the probability of finding exactly an integer number of electrons in an arbitrarily chosen volume are only known for single-determinant wave functions [E. Cances et al., Theor. Chem. Acc. 111, 373 (2004)]. In this article, an algebraic method is presented that extends these formulas to the case of multideterminant wave functions and any number of disjoint volumes. The derived expressions are applied to compute the probabilities within the atomic domains derived from the space partitioning based on the quantum theory of atoms in molecules. Results for a series of test molecules are presented, paying particular attention to the effects of electron correlation and of some numerical approximations on the computed probabilities.
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is adopted and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy. Probability distribution functions of exponential form, of power-law form, and of the product form between a power function and an exponential function are then derived from the Shannon entropy and the maximum entropy principle. This shows that the maximum entropy principle can fully replace the equal-probability hypothesis. Since the power-law distribution and the product-form distribution cannot be derived from the equal-probability hypothesis but can be derived with the aid of the maximum entropy principle, it is concluded that the maximum entropy principle is the more basic principle, embodying concepts more extensively and revealing the laws of motion of objects more fundamentally. At the same time, this principle also reveals the intrinsic link between Nature and the various objects of human society, and the principles with which all of them comply.
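As a hedged aide-memoire (the ordinary Shannon-entropy version, without the paper's incomplete-statistics factors): maximizing $S = -\int p(x)\ln p(x)\,dx$ under normalization plus different moment constraints yields exactly the three families mentioned,

$$\text{fixed } \langle x\rangle:\;\; p(x)\propto e^{-\beta x};\qquad \text{fixed } \langle \ln x\rangle:\;\; p(x)\propto x^{-\lambda};\qquad \text{both}:\;\; p(x)\propto x^{-\lambda}e^{-\beta x},$$

where $\beta$ and $\lambda$ are Lagrange multipliers fixed by the constraints. The equal-probability hypothesis recovers only the unconstrained uniform case, which is the asymmetry the abstract exploits.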
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
Chemtob, Claude M; Pat-Horenczyk, Ruth; Madan, Anita; Pitman, Seth R; Wang, Yanping; Doppelt, Osnat; Burns, Kelly Dugan; Abramovitz, Robert; Brom, Daniel
2011-12-01
In this study, we examined the relationships among terrorism exposure, functional impairment, suicidal ideation, and probable partial or full posttraumatic stress disorder (PTSD) from exposure to terrorism in adolescents continuously exposed to this threat in Israel. A convenience sample of 2,094 students, aged 12 to 18, was drawn from 10 Israeli secondary schools. In terms of demographic factors, older age was associated with increased risk for suicidal ideation, OR = 1.33, 95% CI [1.09, 1.62], p < .01, but was protective against probable partial or full PTSD, OR = 0.72, 95% CI [0.54, 0.95], p < .05; female gender was associated with greater likelihood of probable partial or full PTSD, OR = 1.57, 95% CI [1.02, 2.40], p < .05. Exposure to trauma due to terrorism was associated with increased risk for each of the measured outcomes including probable partial or full PTSD, functional impairment, and suicidal ideation. When age, gender, level of exposure to terrorism, probable partial or full PTSD, and functional impairment were examined together, only terrorism exposure and functional impairment were associated with suicidal ideation. This study underscores the importance and feasibility of examining exposure to terrorism and functional impairment as risk factors for suicidal ideation. Copyright © 2011 International Society for Traumatic Stress Studies.
Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD
Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne
2014-01-01
We recorded visual event-related brain potentials (ERPs) from 32 adult male participants (16 high-functioning participants diagnosed with Autism Spectrum Disorder (ASD) and 16 control participants, ranging in age from 18–53 yrs) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability conditions, whereas the probability of a third non-target stimulus was held constant in all conditions. P3 amplitude to target stimuli was more sensitive to probability in ASD than in control participants, whereas P3 amplitude to non-target stimuli was less responsive to probability in ASD participants. This suggests that neural responses to changes in event probability are attention-dependent in high-functioning ASD. The implications of these findings for higher-level behaviors such as prediction and planning are discussed. PMID:24488156
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, A
2016-06-15
Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: The target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97%, which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation, and then utilize the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
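The test family referenced in the title descends from Wald's sequential probability ratio test (SPRT). The sketch below is a generic SPRT over residual samples, offered only as an illustration: the patented method fits the residual density numerically rather than assuming a known form, so the Gaussian densities and error rates here are assumptions.

```python
import math
from scipy.stats import norm

def sprt(samples, pdf0, pdf1, alpha=0.01, beta=0.01):
    """Wald SPRT: accumulate the log-likelihood ratio of H1 (fault)
    against H0 (normal) and stop at the Wald decision thresholds."""
    upper = math.log((1 - beta) / alpha)   # cross above: declare fault
    lower = math.log(beta / (1 - alpha))   # cross below: declare normal
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(pdf1(x) / pdf0(x))
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "normal", n
    return "undecided", len(samples)

# Residual stream whose mean has drifted from 0 to 0.5 (illustrative).
residuals = norm(0.5, 1.0).rvs(size=200, random_state=0)
print(sprt(residuals, norm(0.0, 1.0).pdf, norm(0.5, 1.0).pdf))
```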
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation, and then utilize the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation, and then utilize the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns, as well as the cumulative gain probability and probability density functions, of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network.
Sloma, Michael F; Mathews, David H
2016-12-01
RNA secondary structure prediction is widely used to analyze RNA sequences. In an RNA partition function calculation, free energy nearest neighbor parameters are used in a dynamic programming algorithm to estimate statistical properties of the secondary structure ensemble. Previously, partition functions have largely been used to estimate the probability that a given pair of nucleotides form a base pair, the conditional stacking probability, the accessibility to binding of a continuous stretch of nucleotides, or a representative sample of RNA structures. Here it is demonstrated that an RNA partition function can also be used to calculate the exact probability of formation of hairpin loops, internal loops, bulge loops, or multibranch loops at a given position. This calculation can also be used to estimate the probability of formation of specific helices. Benchmarking on a set of RNA sequences with known secondary structures indicated that loops that were calculated to be more probable were more likely to be present in the known structure than less probable loops. Furthermore, highly probable loops are more likely to be in the known structure than the set of loops predicted in the lowest free energy structures. © 2016 Sloma and Mathews; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm based on QR-decomposition and on modeling gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that gives the probability of each video frame being in a shot transition. The probability function changes abruptly at hard cut transitions and shows semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we report the results of experiments using the large-scale test sets provided by TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
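The two best-fitting families identified in the experiment have standard closed forms, which is what makes the discrimination problem concrete. The sketch below writes them out; the parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

def prelec2(p, gamma, delta):
    """Two-parameter Prelec probability weighting function."""
    return np.exp(-delta * (-np.log(p)) ** gamma)

def linear_in_log_odds(p, gamma, delta):
    """Linear-in-log-odds weighting function (Gonzalez-Wu form)."""
    return delta * p**gamma / (delta * p**gamma + (1.0 - p) ** gamma)

p = np.linspace(0.01, 0.99, 5)
print(prelec2(p, gamma=0.65, delta=1.0))
print(linear_in_log_odds(p, gamma=0.6, delta=0.8))
```

Roughly speaking, adaptive design optimization then selects, trial by trial, the gamble pairs for which the candidate forms' predicted choices diverge the most.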
Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior
Borrero, Carrie S.W; Borrero, John C
2008-01-01
We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
Falck, Ryan S; Landry, Glenn J; Best, John R; Davis, Jennifer C; Chiu, Bryan K; Liu-Ambrose, Teresa
2017-10-01
Mild cognitive impairment (MCI) represents a transition between normal cognitive aging and dementia and may represent a critical time frame for promoting cognitive health through behavioral strategies. Current evidence suggests that physical activity (PA) and sedentary behavior are important for cognition. However, it is unclear whether there are differences in PA and sedentary behavior between people with probable MCI and people without MCI or whether the relationships of PA and sedentary behavior with cognitive function differ by MCI status. The aims of this study were to examine differences in PA and sedentary behavior between people with probable MCI and people without MCI and whether associations of PA and sedentary behavior with cognitive function differed by MCI status. This was a cross-sectional study. Physical activity and sedentary behavior in adults dwelling in the community (N = 151; at least 55 years old) were measured using a wrist-worn actigraphy unit. The Montreal Cognitive Assessment was used to categorize participants with probable MCI (scores of <26/30) and participants without MCI (scores of ≥26/30). Cognitive function was indexed using the Alzheimer Disease Assessment Scale-Cognitive-Plus (ADAS-Cog Plus). Physical activity and sedentary behavior were compared based on probable MCI status, and relationships of ADAS-Cog Plus with PA and sedentary behavior were examined by probable MCI status. Participants with probable MCI (n = 82) had lower PA and higher sedentary behavior than participants without MCI (n = 69). Higher PA and lower sedentary behavior were associated with better ADAS-Cog Plus performance in participants without MCI (β = -.022 and β = .012, respectively) but not in participants with probable MCI (β < .001 for both). This study was cross-sectional and therefore could not establish whether conversion to MCI attenuated the relationships of PA and sedentary behavior with cognitive function. The diagnosis of MCI was not confirmed with a physician; therefore, this study could not conclude how many of the participants categorized as having probable MCI would actually have been diagnosed with MCI by a physician. Participants with probable MCI were less active and more sedentary. The relationships of these behaviors with cognitive function differed by MCI status; associations were found only in participants without MCI. © 2017 American Physical Therapy Association
Negative values of quasidistributions and quantum wave and number statistics
NASA Astrophysics Data System (ADS)
Peřina, J.; Křepelka, J.
2018-04-01
We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.
Xu, X J; Wang, L L; Zhou, N
2016-02-23
To explore the characteristics of ecological executive function in school-aged children with idiopathic or probably symptomatic epilepsy and examine the effects of executive function on social adaptive function. A total of 51 school-aged children with idiopathic or probably symptomatic epilepsy aged 5-12 years at our hospital and 37 normal children of the same gender, age and educational level were included. The differences in ecological executive function and social adaptive function were compared between the two groups with the Behavior Rating Inventory of Executive Function (BRIEF) and the Child Adaptive Behavior Scale; Pearson's correlation test and multiple stepwise linear regression were used to explore the impact of executive function on social adaptive function. The scores of school-aged children with idiopathic or probably symptomatic epilepsy in the global executive composite (GEC), behavioral regulation index (BRI) and metacognition index (MI) of the BRIEF (62±12, 58±13 and 63±12, respectively) were significantly higher than those of the control group (47±7, 44±6 and 48±8, respectively) (P<0.01). The scores of school-aged children with idiopathic or probably symptomatic epilepsy in adaptive behavior quotient (ADQ), independence, cognition and self-control (86±22, 32±17, 49±14 and 41±16, respectively) were significantly lower than those of the control group (120±12, 59±14, 59±7 and 68±10, respectively) (P<0.01). Pearson's correlation test showed that BRIEF scores such as GEC, BRI, MI, inhibition, emotional control, monitoring, initiation and working memory had significantly negative correlations with the scores of ADQ, independence and self-control (r = -0.313 to -0.741, P<0.05). Also, GEC, inhibition, MI, initiation, working memory, plan, organization and monitoring had significantly negative correlations with the score of cognition (r = -0.335 to -0.437, P<0.05). Multiple stepwise linear regression analysis showed that BRI, inhibition and working memory were closely related to the social adaptive function of school-aged children with idiopathic or probably symptomatic epilepsy. School-aged children with idiopathic or probably symptomatic epilepsy may show significant impairment of ecological executive function and reduced social adaptive function. The aspects of BRI, inhibition and working memory in ecological executive function are significantly related to social adaptive function in school-aged children with epilepsy.
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Delay, probability, and social discounting in a public goods game.
Jones, Bryan A; Rachlin, Howard
2009-01-01
A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual social, delay, and probability discount functions for a hypothetical $75 reward; participants also indicated how much of an initial $100 endowment they would contribute to a common investment in a public good. Steepness of discounting correlated, across participants, among all three discount dimensions. However, only social and probability discounting were correlated with the public-good contribution; high public-good contributors were more altruistic and also less risk averse than low contributors. Experiment 2 obtained social discount functions with hypothetical $75 rewards and delay discount functions with hypothetical $1,000 rewards, as well as public-good contributions. The results replicated those of Experiment 1; steepness of the two forms of discounting correlated with each other across participants but only social discounting correlated with the public-good contribution. Most participants in Experiment 2 predicted that the average contribution would be lower than their own contribution.
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions, and exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I_0, gamma and log-gamma functions, error functions and exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
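For readers wanting a present-day analogue of these Fortran libraries, nearly every routine listed maps onto scipy.stats and numpy. The correspondence below is an editorial illustration, not part of the USGS reports:

```python
import numpy as np
from scipy import stats

q = np.linspace(0.01, 0.99, 3)
stats.beta.ppf(q, 2, 5)                  # beta
stats.chi2.cdf(3.0, df=4)                # chi-square
stats.gamma.cdf(3.0, a=2)                # gamma
stats.norm.ppf(q)                        # Gaussian (normal)
stats.pearson3.ppf(q, skew=1.0)          # Pearson Type III
stats.weibull_min.cdf(1.0, c=2)          # Weibull
stats.t.cdf(2.0, df=10)                  # Student's t
stats.nct.cdf(2.0, df=10, nc=1.0)        # noncentral t
stats.f.cdf(3.0, 4, 6)                   # Snedecor F
stats.kstest(stats.norm.rvs(size=50, random_state=0), "norm")  # K-S D
rng = np.random.default_rng(0)
rng.uniform(size=5), rng.normal(size=5)  # uniform and normal generators
```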
Singular solution of the Feller diffusion equation via a spectral decomposition.
Gan, Xinjun; Waxman, David
2015-01-01
Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.
Singular solution of the Feller diffusion equation via a spectral decomposition
NASA Astrophysics Data System (ADS)
Gan, Xinjun; Waxman, David
2015-01-01
Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
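The two-step recipe (spectral coloring of white Gaussian noise, then a memoryless transform of the Gaussian marginal into the target one) can be sketched compactly. The PSD shape and the exponential target marginal below are illustrative assumptions, and, as the authors note, the final nonlinear transform perturbs the spectrum somewhat, which is why they describe the method as an engineering approach.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256

# Step 1: color white Gaussian noise in the Fourier domain by sqrt(PSD).
white = rng.standard_normal((n, n))
fx = np.fft.fftfreq(n)
f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
psd = 1.0 / (1.0 + (f / 0.05) ** 2)  # example low-pass PSD
colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
colored /= colored.std()

# Step 2: memoryless transform of the Gaussian marginal into the target
# amplitude PDF (here a negative-exponential, speckle-like intensity).
u = stats.norm.cdf(colored)    # Gaussian -> uniform
field = stats.expon.ppf(u)     # uniform -> target marginal
print(field.mean(), field.std())
```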
[Determinants of pride and shame: outcome, expected success and attribution].
Schützwohl, A
1991-01-01
In two experiments we investigated the relationship between subjective probability of success and pride and shame. According to Atkinson (1957), pride (the incentive of success) is an inverse linear function of the probability of success, shame (the incentive of failure) being a negative linear function. Attribution theory predicts an inverse U-shaped relationship between subjective probability of success and pride and shame. The results presented here are at variance with both theories: Pride and shame do not vary with subjective probability of success. However, pride and shame are systematically correlated with internal attributions of action outcome.
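For reference, the two competing predictions can be written out. In a standard reading of Atkinson's (1957) risk-taking model, with $P_s$ the subjective probability of success (stated here as a hedged gloss, not a quotation from the paper):

$$I_s = 1 - P_s \;\;\text{(pride, the incentive of success)},\qquad I_f = -P_s \;\;\text{(shame, the incentive of failure)},$$

so both incentives are linear in $P_s$, and the resultant achievement tendency $P_s I_s = P_s(1-P_s)$ peaks at $P_s = 1/2$; attribution theory instead predicts an inverse U-shaped dependence of pride and shame themselves on $P_s$. The data reported above are consistent with neither prediction.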
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is used first to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and an angular-dependent externally incident flux upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results of the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables are derived. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions are also obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinogradskiy, Y; Miyasaka, Y; Kadoya, N
Purpose: CT-ventilation is an exciting new imaging modality that uses 4DCTs to calculate lung ventilation. Studies have proposed to use 4DCT-ventilation imaging for functional avoidance radiotherapy, which implies designing treatment plans to spare functional portions of the lung. Although retrospective studies have been performed to evaluate the dosimetric gains to functional lung, no work has been done to translate the dosimetric gains to an improvement in pulmonary toxicity. The purpose of our work was to evaluate the potential reduction in toxicity for 4DCT-ventilation based functional avoidance. Methods: 70 lung cancer patients with 4DCT imaging were used for the study. CT-ventilation maps were calculated using the patient's 4DCT, deformable image registrations, and a density-change-based algorithm. Radiation pneumonitis was graded using imaging and clinical information. Log-likelihood methods were used to fit a normal-tissue-complication-probability (NTCP) model predicting grade 2+ radiation pneumonitis as a function of doses (mean and V20) to functional lung (>15% ventilation). For 20 patients a functional plan was generated that reduced dose to functional lung while meeting RTOG 0617-based constraints. The NTCP model was applied to the functional plan to determine the reduction in toxicity with functional planning. Results: The mean dose to functional lung was 16.8 and 17.7 Gy with the functional and clinical plans respectively. The corresponding grade 2+ pneumonitis probability was 26.9% with the clinically-used plan and 24.6% with the functional plan (8.5% reduction). The V20-based grade 2+ pneumonitis probability was 23.7% with the clinically-used plan and reduced to 19.6% with the functional plan (20.9% reduction). Conclusion: Our results revealed a reduction of 9-20% in complication probability with functional planning. To our knowledge this is the first study to apply complication probability to convert dosimetric results to toxicity improvement. The results presented in the current work provide seminal data for prospective clinical trials in functional avoidance. YV discloses funding from State of Colorado. TY discloses National Lung Cancer Partnership; Young Investigator Research grant.
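The NTCP fitting step admits a compact sketch. Below is a hedged illustration of a maximum-likelihood logistic NTCP model in terms of mean functional lung dose; the model form, variable names, and the synthetic 70-patient data are assumptions for illustration, not the study's actual model or data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_ntcp(dose, event):
    """Maximum-likelihood fit of NTCP(D) = 1/(1 + exp(-(b0 + b1*D))),
    with D the mean dose to functional lung and `event` a 0/1 marker
    of grade 2+ radiation pneumonitis."""
    def nll(b):
        p = np.clip(expit(b[0] + b[1] * dose), 1e-12, 1 - 1e-12)
        return -np.sum(event * np.log(p) + (1 - event) * np.log(1 - p))
    return minimize(nll, x0=np.zeros(2), method="Nelder-Mead").x

# Synthetic illustration only; the abstract's patient data are not given.
rng = np.random.default_rng(0)
dose = rng.uniform(5.0, 30.0, size=70)
event = (rng.uniform(size=70) < expit(-3.0 + 0.12 * dose)).astype(float)
print(fit_ntcp(dose, event))
```

Once fitted, such a model is simply evaluated at each plan's dose metric to convert a dosimetric gain into a predicted toxicity reduction, as done in the abstract.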
Functional-diversity indices can be driven by methodological choices and species richness.
Poos, Mark S; Walker, Steven C; Jackson, Donald A
2009-02-01
Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive to such choices, and whether patterns in sensitivity were related to the alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species into two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Optimized Vertex Method and Hybrid Reliability
NASA Technical Reports Server (NTRS)
Smith, Steven A.; Krishnamurthy, T.; Mason, B. H.
2002-01-01
A method of calculating the fuzzy response of a system is presented. This method, called the Optimized Vertex Method (OVM), is based upon the vertex method but requires considerably fewer function evaluations. The method is demonstrated by calculating the response membership function of strain-energy release rate for a bonded joint with a crack. The possibility of failure of the bonded joint was determined over a range of loads. After completing the possibilistic analysis, the possibilistic (fuzzy) membership functions were transformed to probability density functions and the probability of failure of the bonded joint was calculated. This approach is called a possibility-based hybrid reliability assessment. The possibility and probability of failure are presented and compared to a Monte Carlo Simulation (MCS) of the bonded joint.
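The plain vertex method underlying the OVM is simple to state: for each alpha-cut of the fuzzy inputs, the response interval is estimated from the function values at every corner of the input box. The sketch below is that baseline method (exact when the response is monotonic in each variable); the example function and cuts are illustrative, and the paper's OVM reduces the 2^n corner evaluations required here.

```python
from itertools import product

def vertex_method(f, alpha_cuts):
    """Baseline vertex method: for each alpha level, evaluate f at every
    corner of the box of input intervals and take the min and max.
    `alpha_cuts` maps an alpha level to a list of (lo, hi) intervals."""
    response = {}
    for alpha, intervals in alpha_cuts.items():
        values = [f(*corner) for corner in product(*intervals)]
        response[alpha] = (min(values), max(values))
    return response

# Illustrative two-variable response and fuzzy inputs.
g = lambda a, b: a * b + a ** 2
cuts = {1.0: [(1.0, 1.0), (2.0, 2.0)],   # core (crisp values)
        0.5: [(0.8, 1.2), (1.5, 2.5)]}   # wider alpha-cut
print(vertex_method(g, cuts))
```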
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based on the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are poorly fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to construct the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
What is the correct cost functional for variational data assimilation?
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2018-03-01
Variational approaches to data assimilation, and weakly constrained four-dimensional variational data assimilation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with maximum a posteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this situation, the MAP estimator (or "most probable path" of the SDE) is obtained by minimising the Onsager-Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or "least squares") functional sometimes being claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice where SDEs are approximated by discrete time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggests that even in discrete time, a version of the Onsager-Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
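The practical difference between the two functionals is easy to exhibit numerically. Below is a minimal sketch (not the paper's construction) that discretizes both for a scalar SDE dX = b(X)dt + σdW: the Onsager-Machlup functional is the energy ("least squares") functional plus the drift-divergence correction (1/2)∫b'(x)dt, which is exactly the term dropped when the energy functional is used. The drift, noise level, and candidate path below are invented for illustration:

```python
import numpy as np

def energy_functional(x, b, sigma, dt):
    """Discretized energy ('least squares') functional: integral of (xdot - b(x))^2 / (2 sigma^2)."""
    xdot = np.diff(x) / dt
    return np.sum((xdot - b(x[:-1])) ** 2) / (2.0 * sigma ** 2) * dt

def onsager_machlup(x, b, db, sigma, dt):
    """Onsager-Machlup functional: energy term plus the drift-divergence correction (1/2) integral of b'(x)."""
    return energy_functional(x, b, sigma, dt) + 0.5 * np.sum(db(x[:-1])) * dt

b  = lambda x: x - x ** 3          # illustrative double-well drift
db = lambda x: 1.0 - 3.0 * x ** 2
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
x = np.tanh(t)                     # a hypothetical candidate path
print(energy_functional(x, b, 1.0, dt), onsager_machlup(x, b, db, 1.0, dt))
```

Because the correction term depends on the path, the two functionals are in general minimized by different trajectories, which is the paper's point.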
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
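The rectangular probabilities described here can be reproduced with modern tools by inclusion-exclusion on the joint CDF. A minimal sketch using SciPy, with an illustrative mean, covariance, and rectangle (none taken from the report):

```python
import numpy as np
from scipy.stats import multivariate_normal

mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])  # correlation 0.5

def rect_prob(a, b):
    """P(a1 < X1 <= b1, a2 < X2 <= b2) via inclusion-exclusion on the bivariate normal CDF."""
    return (mvn.cdf([b[0], b[1]]) - mvn.cdf([a[0], b[1]])
            - mvn.cdf([b[0], a[1]]) + mvn.cdf([a[0], a[1]]))

print(rect_prob((-1.0, -1.0), (1.0, 1.0)))
```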
Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.
Chevallier, Maguelonne; Krauth, Werner
2007-11-01
We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the cycle probabilities π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
Ellefsen, Karl J.
2017-06-27
MapMark4 is a software package that implements the probability calculations in three-part mineral resource assessments. Functions within the software package are written in the R statistical programming language. These functions, their documentation, and a copy of this user’s guide are bundled together in R’s unit of shareable code, which is called a “package.” This user’s guide includes step-by-step instructions showing how the functions are used to carry out the probability calculations. The calculations are demonstrated using test data, which are included in the package.
Clinical diagnosis of pneumonia, typical of experts.
Miettinen, Olli S; Flegel, Kenneth M; Steurer, Johann
2008-04-01
Clinical diagnosis of pneumonia is a concern when a patient presents with recent cough--new or worsened--together with fever as the chief complaint. Given this presentation, the doctor would benefit from having access to software that specifies, first, what diagnostic indicators experts typically use in that diagnosis; then, upon entry of those facts, what experts' typical probability of pneumonia is in such a case; and finally, how much this probability might change upon adding the facts from chest radiography. We specified a set of 36 hypothetical presentations of this type by patients 20-70 years of age, involving a comprehensive set of clinical-diagnostic indicators. Members of three separate expert panels independently set the probability of pneumonia in each of these cases, and also the range of possible post-radiography probabilities. A logistic function of the diagnostic indicators was fitted to the medians of the probabilities. The median probability of pneumonia was a joint function of the patient's age and current rate of cigarette smoking; history as to the cough's duration, the fever's maximum, dyspnea (including whether on effort only) and rigors; and physical examination as to temperature, signs of upper respiratory infection, prolongation of expiration, dullness on percussion and some auscultation findings. Non-contributory were history of wheezing, pain on inspiration, type of sputum and signs of cold or influenza. This probability function, and the post-radiography functions based on the same indicators, are accessible at http://www.evimed.ch/pneumonia. The expert inputs to clinical diagnosis that were derived and made readily accessible provide for expert clinical diagnosis of pneumonia, relevant for decisions about radiography and treatment without it.
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
A rational decision rule with extreme events.
Basili, Marcello
2006-12-01
Risks induced by extreme events are characterized by small or ambiguous probabilities, catastrophic losses, or windfall gains. Through a new functional that mimics the restricted Bayes-Hurwicz criterion within the Choquet expected utility approach, it is possible to represent the decision-maker's behavior when facing both risky (large, reliable probability) and extreme (small or ambiguous probability) events. A new formalization of the precautionary principle (PP) is presented, based on a new functional that encompasses both extreme outcomes and the expectation of all possible results for every act.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
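The L1-median itself has no closed form, but the classic Weiszfeld iteration computes it quickly; the sketch below is one standard implementation, not necessarily the authors' own:

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """Weiszfeld iteration: the point minimizing the sum of Euclidean distances to the rows of X."""
    y = X.mean(axis=0)                                        # start from the (non-robust) sample mean
    for _ in range(max_iter):
        d = np.maximum(np.linalg.norm(X - y, axis=1), 1e-12)  # guard against zero distance
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y_new

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:10] += 25.0                                    # gross outliers
print(X.mean(axis=0), l1_median(X))               # the L1-median barely moves; the mean does
```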
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Probability of survival during accidental immersion in cold water.
Wissler, Eugene H
2003-01-01
Estimating the probability of survival during accidental immersion in cold water presents formidable challenges for both theoreticians and empiricists. A number of theoretical models have been developed assuming that death occurs when the central body temperature, computed using a mathematical model, falls to a certain level. This paper describes a different theoretical approach to estimating the probability of survival. The human thermal model developed by Wissler is used to compute the central temperature during immersion in cold water. Simultaneously, a survival probability function is computed by solving a differential equation that defines how the probability of survival decreases with increasing time. The survival equation assumes that the probability of occurrence of a fatal event increases as the victim's central temperature decreases. Generally accepted views of the medical consequences of hypothermia and published reports of various accidents provide information useful for defining a "fatality function" that increases exponentially with decreasing central temperature. The particular function suggested in this paper yields a relationship between immersion time for 10% probability of survival and water temperature that agrees very well with Molnar's empirical observations based on World War II data. The method presented in this paper circumvents a serious difficulty with most previous models--that one's ability to survive immersion in cold water is determined almost exclusively by the ability to maintain a high level of shivering metabolism.
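The structure of this approach can be sketched as a survival probability obeying dS/dt = -h(Tc(t)) S, with a fatality rate h that grows exponentially as the central temperature Tc falls. The rate constants and cooling curve below are invented placeholders, not Wissler's fitted values:

```python
import numpy as np

def survival_probability(t, Tc, h0=1e-4, k=0.7, T_ref=37.0):
    """Integrate dS/dt = -h(Tc)*S with h(Tc) = h0*exp(k*(T_ref - Tc)) (illustrative fatality function)."""
    S = np.ones_like(t)
    for i in range(1, len(t)):
        h = h0 * np.exp(k * (T_ref - Tc[i]))
        S[i] = S[i - 1] * np.exp(-h * (t[i] - t[i - 1]))
    return S

t = np.linspace(0.0, 360.0, 361)                  # minutes of immersion
Tc = 37.0 - 7.0 * (1.0 - np.exp(-t / 180.0))      # hypothetical cooling curve, not a thermal model
S = survival_probability(t, Tc)
print(t[np.argmax(S < 0.9)], "min until survival probability drops below 90%")
```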
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft-sphere fluids at high density. The Gaussian form for P(F) remains accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value for the constant A than that predicted by the DFT theory.
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA(2,1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, parametric excitations should be considered appropriately when the ship navigates in oblique seas.
Topics in inference and decision-making with partial knowledge
NASA Technical Reports Server (NTRS)
Safavian, S. Rasoul; Landgrebe, David
1990-01-01
Two essential elements needed in the process of inference and decision-making are prior probabilities and likelihood functions. When both of these components are known accurately and precisely, the Bayesian approach provides a consistent and coherent solution to the problems of inference and decision-making. In many situations, however, either one or both of the above components may not be known, or at least may not be known precisely. This problem of partial knowledge about prior probabilities and likelihood functions is addressed. There are at least two ways to cope with this lack of precise knowledge: robust methods, and interval-valued methods. First, ways of modeling imprecision and indeterminacies in prior probabilities and likelihood functions are examined; then how imprecision in the above components carries over to the posterior probabilities is examined. Finally, the problem of decision making with imprecise posterior probabilities and the consequences of such actions are addressed. Application areas where the above problems may occur are in statistical pattern recognition problems, for example, the problem of classification of high-dimensional multispectral remote sensing image data.
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
The values of the Normalized Radar Backscattering Cross Section (NRCS), σ0, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of σ0 are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.
A discrimination method for the detection of pneumonia using chest radiograph.
Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R
2010-03-01
This paper presents a statistical method for the detection of lobar pneumonia when using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures, which was then multiplied by the orthogonal matrix Q2. The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The results of this study recommend detecting pneumonia by constructing probability ellipsoids or discriminant functions using the maximum energy and maximum column sum energy texture measures, for which misclassification probabilities were less than 0.15. 2009 Elsevier Ltd. All rights reserved.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
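The scale-invariance argument is easy to probe numerically: repeated multiplication of samples from an arbitrary normalizable pdf by independent positive scale factors drives the leading digits toward Benford's law, the signature of the reciprocal distribution. A minimal Monte Carlo sketch under those assumptions (the starting pdf and scale-factor law are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, size=200_000)          # arbitrary starting pdf
for _ in range(8):                                # repeated changes of scale
    x *= rng.lognormal(mean=0.0, sigma=1.0, size=x.size)

lead = (x / 10.0 ** np.floor(np.log10(x))).astype(int)      # leading digit, 1..9
observed = np.bincount(lead, minlength=10)[1:10] / x.size
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))            # Benford's law
print(np.round(observed, 4))
print(np.round(benford, 4))
```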
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
"Mysteries" of the First and Second Laws of Thermodynamics
ERIC Educational Resources Information Center
Battino, Rubin
2007-01-01
The thermodynamic concepts of the First and Second Laws with respect to the entropy function are described using atoms and molecules and probability as manifested in statistical mechanics. The First Law is conceptually understood as ΔU = Q + W, and the Second Law of Thermodynamics and the entropy function have provided the probability and…
Neural response to reward anticipation under risk is nonlinear in probabilities.
Hsu, Ming; Krajbich, Ian; Zhao, Chen; Camerer, Colin F
2009-02-18
A widely observed phenomenon in decision making under risk is the apparent overweighting of unlikely events and the underweighting of nearly certain events. This violates standard assumptions in expected utility theory, which requires that expected utility be linear (objective) in probabilities. Models such as prospect theory have relaxed this assumption and introduced the notion of a "probability weighting function," which captures the key properties found in experimental data. This study reports functional magnetic resonance imaging (fMRI) data showing that the neural response to expected reward is nonlinear in probabilities. Specifically, we found that activity in the striatum during valuation of monetary gambles is nonlinear in probabilities in the pattern predicted by prospect theory, suggesting that probability distortion is reflected at the level of the reward encoding process. The degree of nonlinearity reflected in individual subjects' decisions is also correlated with striatal activity across subjects. Our results shed light on the neural mechanisms of reward processing, and have implications for future neuroscientific studies of decision making involving extreme tails of the distribution, where probability weighting provides an explanation for commonly observed behavioral anomalies.
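For readers unfamiliar with the curve in question, the one-parameter weighting function of Tversky and Kahneman (1992) reproduces the inverse-S pattern; the sketch below uses it purely as an illustration, since the paper's fitted parameterization may differ:

```python
import numpy as np

def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting; inverse-S shaped for gamma < 1."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

p = np.array([0.01, 0.10, 0.50, 0.90, 0.99])
print(np.round(weight(p), 3))   # small p overweighted (w > p), large p underweighted (w < p)
```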
21 CFR 1316.10 - Administrative probable cause.
Code of Federal Regulations, 2010 CFR
2010-04-01
21 CFR § 1316.10 (Food and Drugs; Drug Enforcement Administration, Department of Justice; Administrative Functions, Practices, and Procedures; Administrative Inspections): Administrative probable cause. If the judge...
2012-01-01
Action potentials at neurons and graded signals at synapses are primary codes in the brain. Studies of their functional interaction have focused on the influence of presynaptic spike patterns on synaptic activity. How synapse dynamics quantitatively regulates the encoding of postsynaptic digital spikes remains unclear. We investigated this question at unitary glutamatergic synapses on cortical GABAergic neurons, focusing on the quantitative influences of release probability on synapse dynamics and neuronal encoding. Glutamate release probability and synaptic strength are proportionally upregulated by presynaptic sequential spikes. The upregulation of release probability and the efficiency of probability-driven synaptic facilitation are strengthened by elevating presynaptic spike frequency and Ca2+. The upregulation of release probability improves spike capacity and timing precision at the postsynaptic neuron. These results suggest that the upregulation of presynaptic glutamate release facilitates the conversion of synaptic analogue signals into digital spikes in postsynaptic neurons, i.e., a functional compatibility between presynaptic and postsynaptic partners. PMID:22852823
The Effects of the Previous Outcome on Probabilistic Choice in Rats
Marshall, Andrew T.; Kirkpatrick, Kimberly
2014-01-01
This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915
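The winning model class can be stated compactly: a delta-rule value estimate updated once per trial, so that older outcomes decay exponentially with trial lag. A minimal sketch with placeholder learning rate and reward stream (not the paper's fitted values):

```python
import numpy as np

def update_values(rewards, alpha=0.2, v0=0.0):
    """Trial-based exponentially decaying value estimate: V <- V + alpha*(R - V) after each trial."""
    V = np.empty(len(rewards) + 1)
    V[0] = v0
    for i, r in enumerate(rewards):
        V[i + 1] = V[i] + alpha * (r - V[i])  # an outcome j trials back carries weight (1-alpha)^j
    return V

rng = np.random.default_rng(2)
rewards = np.where(rng.random(20) < 0.33, 9, 0)   # uncertain option: 9 pellets with p = .33
print(np.round(update_values(rewards), 2))
```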
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-11-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz Cellular Automaton by using five-bit demons near the infinite-lattice critical temperature with linear dimensions L = 4, 6, 8, 10. The order parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
NASA Astrophysics Data System (ADS)
Kumar, Santosh
2015-11-01
We provide a proof to a recent conjecture by Forrester (2014 J. Phys. A: Math. Theor. 47 065202) regarding the algebraic and arithmetic structure of Meijer G-functions which appear in the expression for probability of all eigenvalues real for the product of two real Gaussian matrices. In the process we come across several interesting identities involving Meijer G-functions.
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
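For reference, one common parameterization of the modified Rician density for speckle intensity I, with coherent part Ic and speckle part Is, is p(I) = (1/Is) exp(-(I+Ic)/Is) I0(2·sqrt(I·Ic)/Is). A minimal sketch using the exponentially scaled Bessel function for numerical stability; the parameter values are illustrative only:

```python
import numpy as np
from scipy.special import i0e            # i0e(z) = exp(-z) * I0(z), avoids overflow

def modified_rician(I, Ic, Is):
    """Modified Rician pdf of speckle intensity (Ic: coherent/core part, Is: speckle part)."""
    z = 2.0 * np.sqrt(I * Ic) / Is
    return (1.0 / Is) * np.exp(-(I + Ic) / Is + z) * i0e(z)   # exponent recombined analytically

I = np.linspace(0.0, 40.0, 4001)
pdf = modified_rician(I, Ic=5.0, Is=2.0)
print(pdf.sum() * (I[1] - I[0]))         # should be close to 1
```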
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: iterating Newton's method on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ‖b−a‖₂. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
On Replacing "Quantum Thinking" with Counterfactual Reasoning
NASA Astrophysics Data System (ADS)
Narens, Louis
The probability theory used in quantum mechanics is currently being employed by psychologists to model the impact of context on decision making. Its event space consists of closed subspaces of a Hilbert space, and its probability functions sometimes violate the law of finite additivity of probabilities. Results from the quantum mechanics literature indicate that such a "Hilbert space probability theory" cannot be extended in a useful way to standard, finitely additive, probability theory by the addition of new events with specific probabilities. This chapter presents a new kind of probability theory that shares many fundamental algebraic characteristics with Hilbert space probability theory but does extend to standard probability theory by adjoining new events with specific probabilities. The new probability theory arises from considerations about how psychological experiments are related through counterfactual reasoning.
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe
NASA Technical Reports Server (NTRS)
Isaacson, Jeffrey A.; Canizares, Claude R.
1989-01-01
Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org. Contact: mjswat@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Probability distribution functions in turbulent convection
NASA Technical Reports Server (NTRS)
Balachandar, S.; Sirovich, L.
1991-01-01
Results of an extensive investigation of probability distribution functions (pdfs) for Rayleigh-Benard convection, in hard turbulence regime, are presented. It is shown that the pdfs exhibit a high degree of internal universality. In certain cases this universality is established within two Kolmogorov scales of a boundary. A discussion of the factors leading to the universality is presented.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrödinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper, SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Pöschl-Teller potential. Graphs of the wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: as early as the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution," that is, a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
Formulation 7 3.2.2 Methods of Accelerating Convergence 8 3.2.3 Application to Image Deblurring 8 3.2.4 Extensions 11 3.3 Convergence of Iterative Signal... noise -driven linear filters, permit development of the joint probability density function oz " kelihood function for the image. With an expression...spatial linear filter driven by white noise (see Fig. i). If the probability density function for the white noise is known, Fig. t. Model for image
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
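Specialized to a finite sample of model solutions with error values E_i, the canonical form is p_i ∝ exp(-β E_i), with β fixed by the expected-error constraint. A minimal sketch of that calibration step; the error values and target are invented, and the root bracket is assumed adequate:

```python
import numpy as np
from scipy.optimize import brentq

def canonical_weights(E, E_target):
    """Maximum entropy weights p_i ~ exp(-beta*E_i) subject to sum(p_i * E_i) = E_target."""
    def gap(beta):
        w = np.exp(-beta * (E - E.min()))        # shifted for numerical stability
        p = w / w.sum()
        return p @ E - E_target
    beta = brentq(gap, 0.0, 1e4)                 # bracket assumed to contain the root
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta

rng = np.random.default_rng(3)
E = rng.uniform(0.5, 2.0, size=1000)             # hypothetical error-function values
p, beta = canonical_weights(E, E_target=0.8)
print(beta, p @ E)
```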
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach can be used for dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
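The backbone the hybrid approach builds on is the classic spectral representation with random phases; the paper's reduction to two elementary random variables via random functions is not reproduced in the sketch below, which uses an invented two-sided spectrum:

```python
import numpy as np

def simulate_srm(S, w_max, N, t, rng):
    """Classic spectral representation: x(t) = sqrt(2) * sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k)."""
    dw = w_max / N
    w = (np.arange(N) + 0.5) * dw                        # frequency grid midpoints
    phi = rng.uniform(0.0, 2.0 * np.pi, size=N)          # independent random phases
    A = np.sqrt(2.0 * S(w) * dw)
    return np.sqrt(2.0) * (A * np.cos(np.outer(t, w) + phi)).sum(axis=1)

S = lambda w: 1.0 / (1.0 + w ** 2)                       # hypothetical two-sided spectrum
t = np.linspace(0.0, 100.0, 2001)
x = simulate_srm(S, w_max=20.0, N=1024, t=t, rng=np.random.default_rng(4))
print(x.var(), "target:", 2.0 * np.arctan(20.0))         # variance = integral of the two-sided S
```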
Stochastic optimal operation of reservoirs based on copula functions
NASA Astrophysics Data System (ADS)
Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen
2018-02-01
Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using the three members of the Archimedean copulas, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study in Ertan reservoir, China. The results show that the transition probability matrix can be more easily and accurately obtained by the proposed copula function based method than conventional methods based on the observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
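The copula-based transition probability follows by differentiation: P(V ≤ v | U = u) = ∂C(u, v)/∂u, where U and V are the inflow quantiles of adjacent periods. A minimal sketch for the Clayton copula, one of the three Archimedean families mentioned; θ and the quantiles are placeholders:

```python
import numpy as np

def clayton_conditional(u, v, theta):
    """P(V <= v | U = u) = dC/du for the Clayton copula C(u,v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return u ** (-theta - 1.0) * (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta - 1.0)

u = 0.7                                  # this period's inflow as a quantile of its marginal
v = np.linspace(0.05, 0.95, 10)          # candidate next-period quantiles
print(np.round(clayton_conditional(u, v, theta=2.0), 3))
```

Discretizing v into classes and differencing these conditional probabilities yields one row of the transition probability matrix.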
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained by solving a system of Volterra equations of the first kind. In addition, we develop an ad hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
NASA Astrophysics Data System (ADS)
Shao, Lin; Peng, Luohan
2009-12-01
Although multiple scattering theories have been well developed, numerical calculation is complicated and only tabulated values have been available, which has caused inconvenience in practical use. We have found that a Pearson VII distribution function can be used to fit Lugujjo and Mayer's probability curves describing the dechanneling phenomenon in backscattering analysis over a wide range of disorder levels. Differentiation of the obtained function gives another function for calculating the angular dispersion of the beam in the frameworks of Sigmund and Winterbon. The present work provides an easy calculation of both dechanneling probability and angular dispersion for any arbitrary combination of beam and target having a reduced thickness ⩾0.6, which can be implemented in the modeling of channeling spectra. Furthermore, we used a Monte Carlo simulation program to calculate the deflection probability and compared the results with previously tabulated data. A good agreement was reached.
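A Pearson VII profile is a power of a Lorentzian-like kernel. A minimal sketch of one common parameterization, f(x) = A[1 + ((x - x0)/w)^2]^(-m); the paper's fitted parameter values are not reproduced here:

```python
import numpy as np

def pearson7(x, A, x0, w, m):
    """Pearson VII profile: a Lorentzian at m = 1, approaching a Gaussian as m grows large."""
    return A * (1.0 + ((x - x0) / w) ** 2) ** (-m)

x = np.linspace(-5.0, 5.0, 11)
print(np.round(pearson7(x, A=1.0, x0=0.0, w=1.5, m=2.0), 4))
```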
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
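The selection step in this approach can be mimicked with standard tools: fit each candidate probability density by maximum likelihood and rank the fits by an information criterion. The paper supplies R code; the sketch below is a Python analogue with simulated stand-in data, not the Klamath River observations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
depth = rng.gamma(shape=4.0, scale=0.3, size=300)        # stand-in for observed depth use (m)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(depth, floc=0)                     # MLE with location pinned at zero
    loglik = dist.logpdf(depth, *params).sum()
    k = len(params) - 1                                  # free parameters (loc is fixed)
    print(name, "AIC =", round(2 * k - 2 * loglik, 1))   # smaller is better
```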
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the cross-entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that realizes a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates during training, and the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates after being trained to minimize the cross-entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
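Operationally, the threshold is set on the discriminant output using noise-only data so that the desired false-alarm probability is met. A minimal sketch with scikit-learn's logistic regression (a cross-entropy-trained discriminant); the signal model and the P_fa value are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
noise = rng.normal(0.0, 1.0, size=(5000, 4))             # H0: noise only
signal = rng.normal(0.5, 1.0, size=(5000, 4))            # H1: signal plus noise (toy model)
X, y = np.vstack([noise, signal]), np.r_[np.zeros(5000), np.ones(5000)]

clf = LogisticRegression().fit(X, y)                     # training minimizes cross-entropy
scores_h0 = clf.predict_proba(rng.normal(0.0, 1.0, size=(20000, 4)))[:, 1]
threshold = np.quantile(scores_h0, 1.0 - 0.01)           # fix P_fa = 0.01 from H0 scores
scores_h1 = clf.predict_proba(rng.normal(0.5, 1.0, size=(20000, 4)))[:, 1]
print("P_fa ~", (scores_h0 > threshold).mean(), "P_d ~", (scores_h1 > threshold).mean())
```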
Niles, Justin K.; Gustave, Jackson; Cohen, Hillel W.; Zeig-Owens, Rachel; Kelly, Kerry J.; Glass, Lara; Prezant, David J.
2011-01-01
Background: We describe the relationship between World Trade Center (WTC) cough syndrome symptoms, pulmonary function, and symptoms consistent with probable posttraumatic stress disorder (PTSD) in WTC-exposed firefighters in the first year post-September 11, 2001 (baseline), and 3 to 4 years later (follow-up). Methods: Five thousand three hundred sixty-three firefighters completed pulmonary function tests (PFTs) and questionnaires at both times. Relationships among WTC cough syndrome, probable PTSD, and PFTs were analyzed using simple and multivariable models. We also examined the effects of cofactors, including WTC exposure. Results: WTC cough syndrome was found in 1,561 firefighters (29.1%) at baseline and 1,186 (22.1%) at follow-up, including 559 with delayed onset (present only at follow-up). Probable PTSD was found in 458 firefighters (8.5%) at baseline and 548 (10.2%) at follow-up, including 343 with delayed onset. Baseline PTSD symptom counts and probable PTSD were associated with WTC cough syndrome at baseline, at follow-up, and in those with delayed-onset WTC cough syndrome. Similarly, WTC cough syndrome symptom counts and WTC cough syndrome at baseline were associated with probable PTSD at baseline, at follow-up, and in those with delayed-onset probable PTSD. WTC arrival time and work duration were cofactors of both outcomes. A small but consistent association existed between pulmonary function and WTC cough syndrome, but none with PTSD. Conclusions: The study showed a moderate association between WTC cough syndrome and probable PTSD. The presence of one contributed to the likelihood of the other, even after adjustment for shared cofactors such as WTC exposure. PMID:21546435
Methods, apparatus and system for notification of predictable memory failure
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-01-03
A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
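A minimal sketch of the claimed steps, assuming a hypothetical logistic mapping from monitored conditions to a failure probability. The patent specifies only that the probability is calculated as a function of the obtained information; the condition names, weights, and fixed threshold below are illustrative, not the patented method.

```python
import math

def failure_probability(correctable_errors: int, temperature_c: float, age_hours: float) -> float:
    """Map observed memory conditions to a failure probability.

    A hypothetical logistic model -- the patent does not disclose the
    functional form, only that the probability is a function of the
    obtained condition information.
    """
    z = 0.05 * correctable_errors + 0.02 * (temperature_c - 40.0) + 1e-5 * age_hours - 4.0
    return 1.0 / (1.0 + math.exp(-z))

def check_memory(correctable_errors, temperature_c, age_hours, threshold=0.5):
    p = failure_probability(correctable_errors, temperature_c, age_hours)
    if p > threshold:                        # signal a predicted future failure
        print(f"WARNING: predicted memory failure (p = {p:.2f})")
    return p

check_memory(correctable_errors=120, temperature_c=65.0, age_hours=30000.0)
```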
High probability neurotransmitter release sites represent an energy efficient design
Lu, Zhongmin; Chouhan, Amit K.; Borycz, Jolanta A.; Lu, Zhiyuan; Rossano, Adam J.; Brain, Keith L.; Zhou, You; Meinertzhagen, Ian A.; Macleod, Gregory T.
2016-01-01
Nerve terminals contain multiple sites specialized for the release of neurotransmitters. Release usually occurs with low probability, a design thought to confer many advantages. High probability release sites are not uncommon but their advantages are not well understood. Here we test the hypothesis that high probability release sites represent an energy efficient design. We examined release site probabilities and energy efficiency at the terminals of two glutamatergic motor neurons synapsing on the same muscle fiber in Drosophila larvae. Through electrophysiological and ultrastructural measurements we calculated release site probabilities to differ considerably between terminals (0.33 vs. 0.11). We estimated the energy required to release and recycle glutamate from the same measurements. The energy required to remove calcium and sodium ions subsequent to nerve excitation was estimated through microfluorimetric and morphological measurements. We calculated energy efficiency as the number of glutamate molecules released per ATP molecule hydrolyzed, and high probability release site terminals were found to be more efficient (0.13 vs. 0.06). Our analytical model indicates that energy efficiency is optimal (~0.15) at high release site probabilities (~0.76). As limitations in energy supply constrain neural function, high probability release sites might ameliorate such constraints by demanding less energy. Energy efficiency can be viewed as one aspect of nerve terminal function, in balance with others, because high efficiency terminals depress significantly during episodic bursts of activity. PMID:27593375
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. A total of 30 years of hindcast wave height, wind speed, and current velocity data for the Bohai Sea are sampled for a case study. Four kinds of distributions, namely the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
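A minimal sketch of the copula construction, assuming Weibull marginals for brevity where the paper fits Pearson Type III; it evaluates the bivariate Gumbel-Hougaard copula and the resulting joint exceedance probability and return period for hypothetical design values.

```python
import numpy as np

def gumbel_hougaard(u, v, theta):
    """Bivariate Gumbel-Hougaard copula C(u, v), with theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def weibull_cdf(x, shape, scale):
    return 1.0 - np.exp(-((x / scale) ** shape))

# Hypothetical marginal parameters and design values (the paper fits
# Pearson Type III marginals to Bohai Sea hindcast data).
h, v = 6.0, 25.0                         # design wave height (m), wind speed (m/s)
u = weibull_cdf(h, shape=2.0, scale=2.5)
w = weibull_cdf(v, shape=2.2, scale=9.0)

# Joint exceedance probability P(H > h, V > v) via the copula.
p_joint = 1.0 - u - w + gumbel_hougaard(u, w, theta=2.5)
print("joint annual exceedance probability:", p_joint)
print("joint return period (years):", 1.0 / p_joint)
```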
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of these data, together with observed sounding rocket re-entry body damage characteristics, to assess the probabilities of sustaining various levels of heating damage. The results effectively bridge the gap in sounding rocket re-entry analysis between the known damage level/flight environment relationships and the predicted flight environment.
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for the evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
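A hedged sketch of the underlying quantities (not the authors' code): estimate the conditional spike probability from a joint histogram of spikes and one input variable, and compute the mutual information between spike occurrence and that variable; in the proposed method the tuning-function parameters would be chosen to maximize this quantity. The synthetic tuning curve below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: variable x and spikes whose probability is tuned to x.
x = rng.uniform(-30, 30, 100000)
p_spike = 1.0 / (1.0 + np.exp(-(x - 5.0) / 4.0))   # "true" tuning function
spikes = rng.random(x.size) < p_spike

def mutual_information(x, spikes, bins=25):
    """I(spike; x) in bits, from a joint histogram estimate."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    joint = np.zeros((2, bins))
    for s in (0, 1):
        joint[s], _ = np.histogram(x[spikes == s], bins=edges)
    joint /= joint.sum()
    px = joint.sum(axis=0)                 # marginal of the variable
    ps = joint.sum(axis=1)                 # marginal of spike occurrence
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / np.outer(ps, px)[mask]))

print("I(spike; x) =", mutual_information(x, spikes), "bits")
```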
Ronza, A; Vílchez, J A; Casal, J
2007-07-19
Risk assessments of hazardous material spill scenarios, and quantitative risk assessment in particular, make use of event trees to account for the possible outcomes of hazardous releases. Using event trees entails defining probabilities of occurrence for events such as spill ignition and blast formation. This study comprises an extensive analysis of ignition and explosion probability data proposed in previous work. Subsequently, the results of a survey of two vast US federal spill databases (HMIRS, by the Department of Transportation, and MINMOD, by the US Coast Guard) are reported and commented on. Some tens of thousands of records of hydrocarbon spills were analysed. The general pattern of statistical ignition and explosion probabilities as a function of the amount and the substance spilled is discussed. Equations based on these statistical data are proposed to predict the ignition probability of hydrocarbon spills, and explosion probabilities are put forth as well. Two sets of probability data are proposed: it is suggested that figures deduced from HMIRS be used in land transportation risk assessment, and MINMOD results in the assessment of maritime scenarios. Results are discussed and compared with previous technical literature.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended to demonstrate function approximation concepts and their applicability in reliability analysis and design. In particular, approximations in the calculation of the safety index, the failure probability, and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximation is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Two approximations arise in calculating the failure probability of a limit state function. The first, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this is a first-order Taylor series expansion, known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or most probable failure point (MPP), is identified; for iteratively finding this point, the limit state is again approximated. The accuracy and efficiency of the approximations make the search process practical for analysis-intensive approaches such as finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods, and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks; probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis, and various forms of function representation and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3: the definitions of the safety index and the most probable point of failure are introduced, and efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the prediction of the probability of failure is presented using first-order, second-order, and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
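As a hedged illustration of the safety-index concept discussed above (not the report's own code), the following sketch computes the Hasofer-Lind safety index and the FORM failure probability for the simplest case of a linear limit state g = R - S with independent normal variables, where FORM happens to be exact; all numerical values are hypothetical.

```python
from math import erf, sqrt

def std_normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Limit state g = R - S (resistance minus load), R and S independent normals.
mu_r, sigma_r = 200.0, 20.0
mu_s, sigma_s = 150.0, 15.0

# Safety index: distance (in standard deviations) from the mean of g to g = 0.
beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
p_fail = std_normal_cdf(-beta)            # FORM is exact for this linear case

print(f"safety index beta = {beta:.3f}, failure probability = {p_fail:.3e}")
```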
On the conservative nature of intragenic recombination
Drummond, D. Allan; Silberg, Jonathan J.; Meyer, Michelle M.; Wilke, Claus O.; Arnold, Frances H.
2005-01-01
Intragenic recombination rapidly creates protein sequence diversity compared with random mutation, but little is known about the relative effects of recombination and mutation on protein function. Here, we compare recombination of the distantly related β-lactamases PSE-4 and TEM-1 to mutation of PSE-4. We show that, among β-lactamase variants containing the same number of amino acid substitutions, variants created by recombination retain function with a significantly higher probability than those generated by random mutagenesis. We present a simple model that accurately captures the differing effects of mutation and recombination in real and simulated proteins with only four parameters: (i) the amino acid sequence distance between parents, (ii) the number of substitutions, (iii) the average probability that random substitutions will preserve function, and (iv) the average probability that substitutions generated by recombination will preserve function. Our results expose a fundamental functional enrichment in regions of protein sequence space accessible by recombination and provide a framework for evaluating whether the relative rates of mutation and recombination observed in nature reflect the underlying imbalance in their effects on protein function. PMID:15809422
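A minimal sketch of the four-parameter model as described in the abstract, where the retention probability declines geometrically with the number of substitutions m and the per-substitution retention probability differs between mutation and recombination; the numerical values are hypothetical, not the paper's fits.

```python
# Sketch of the abstract's model: among variants with m substitutions,
# function is retained with probability p**m, with different per-substitution
# probabilities for random mutation and for recombination.
# These values are hypothetical illustrations, not the paper's estimates.

p_mut = 0.60    # avg. probability a random substitution preserves function
p_rec = 0.90    # avg. probability a recombination-derived substitution does

for m in (5, 10, 20, 40):
    print(f"m = {m:3d}: P_mut = {p_mut**m:.2e}, P_rec = {p_rec**m:.2e}")
```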
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ₀)^α]^(−β), with τ₀ > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain, and suitable probability densities in the domain dual to it in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing the reparameterization β = (2−q)/(q−1) and τ₀ = (q−1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions f_{α,β}(t/τ₀) go over to the one-sided Lévy stable distributions as q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed in numerous physical systems. In such systems, statistical properties [like the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function changes drastically. Moreover, for the wait-first jump model we observe different behavior of the MSD when t_a ≪ t and when t_a ≫ t.
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
A comprehensive parameterization was developed for the heterogeneous reaction probability (γ) of N2O5 as a function of temperature, relative humidity, particle composition, and phase state, for use in advanced air quality models. The reaction probabilities o...
An application of probability to combinatorics: a proof of Vandermonde identity
NASA Astrophysics Data System (ADS)
Paolillo, Bonaventura; Rizzo, Piermichele; Vincenzi, Giovanni
2017-08-01
In this paper, we offer suggestions for a classroom lesson on an application of probability using basic mathematical notions. We approach some combinatorial results without using induction, polynomial identities, or generating functions, and give a proof of the Vandermonde identity using elementary notions of probability.
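The probabilistic argument can be summarized in one line: the hypergeometric probabilities of drawing k red balls among r draws from an urn with m red and n blue balls must sum to one (a standard argument, sketched here; the paper's classroom treatment may differ in detail).

```latex
% Drawing r balls from an urn with m red and n blue balls, the number K of
% red balls drawn is hypergeometric; its probabilities must sum to one:
\[
  \sum_{k=0}^{r} \frac{\binom{m}{k}\binom{n}{r-k}}{\binom{m+n}{r}} = 1
  \quad\Longrightarrow\quad
  \sum_{k=0}^{r} \binom{m}{k}\binom{n}{r-k} = \binom{m+n}{r},
\]
% which is precisely the Vandermonde identity.
```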
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Y.; Pang, N.; Halpin-Healy, T.
1994-12-01
The linear Langevin equation proposed by Edwards and Wilkinson [Proc. R. Soc. London A 381, 17 (1982)] is solved in closed form for noise of arbitrary space and time correlation. Furthermore, the temporal development of the full probability functional describing the height fluctuations is derived exactly, exhibiting an interesting evolution between two distinct Gaussian forms. We determine explicitly the dynamic scaling function for the interfacial width for any given initial condition, isolate the early-time behavior, and discover an invariance that was unsuspected in this problem of arbitrary spatiotemporal noise.
NASA Astrophysics Data System (ADS)
Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Anzalone, A.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; Benzvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Domenico, M.; de Donato, C.; de Jong, S. J.; de La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; de Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; Del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Etchegoyen, A.; Facal San Luis, P.; Fajardo Tapia, I.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Guzman, A.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jarne, C.; Jiraskova, S.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. 
R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mićanović, S.; Micheletti, M. I.; Miller, W.; Miramonti, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Moura, C. A.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Nhung, P. T.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parsons, R. D.; Pastor, S.; Paul, T.; Pech, M.; Pȩkala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Robledo, C.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-D'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Salamida, F.; Salazar, H.; Salina, G.; Sánchez, F.; Santander, M.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stapleton, J.; Stasielak, J.; Stephan, M.; Strazzeri, E.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tamashiro, A.; Tapia, A.; Tartare, M.; Taşcău, O.; Tavera Ruiz, C. G.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winders, L.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Ziolkowski, M.
2011-12-01
In this paper we introduce the concept of the Lateral Trigger Probability (LTP) function, i.e., the probability for an Extensive Air Shower (EAS) to trigger an individual detector of a ground based array as a function of distance to the shower axis, taking into account energy, mass and direction of the primary cosmic ray. We apply this concept to the surface array of the Pierre Auger Observatory consisting of a 1.5 km spaced grid of about 1600 water Cherenkov stations. Using Monte Carlo simulations of ultra-high energy showers the LTP functions are derived for energies in the range between 10^17 and 10^19 eV and zenith angles up to 65°. A parametrization combining a step function with an exponential is found to reproduce them very well in the considered range of energies and zenith angles. The LTP functions can also be obtained from data using events simultaneously observed by the fluorescence and the surface detector of the Pierre Auger Observatory (hybrid events). We validate the Monte Carlo results showing how LTP functions from data are in good agreement with simulations.
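A hedged sketch of the step-plus-exponential parametrization named in the abstract; the break distance and decay scale below are hypothetical placeholders, whereas in the paper such parameters are fitted per energy, mass, and zenith angle.

```python
import numpy as np

def lateral_trigger_probability(r, r0=800.0, scale=250.0):
    """Step-plus-exponential parametrization of the LTP.

    Triggers are taken as certain up to a break distance r0 (meters) and to
    fall off exponentially beyond it; r0 and scale are hypothetical values.
    """
    r = np.asarray(r, dtype=float)
    return np.where(r <= r0, 1.0, np.exp(-(r - r0) / scale))

print(lateral_trigger_probability([500.0, 800.0, 1200.0, 2000.0]))
```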
Probability of satellite collision
NASA Technical Reports Server (NTRS)
Mccarter, J. W.
1972-01-01
A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.
Barch, Deanna M; Treadway, Michael T; Schoen, Nathan
2014-05-01
One of the most debilitating aspects of schizophrenia is an apparent lack of interest in or ability to exert effort for rewards. Such "negative symptoms" may prevent individuals from obtaining potentially beneficial outcomes in educational, occupational, or social domains. In animal models, dopamine abnormalities decrease willingness to work for rewards, implicating dopamine (DA) function as a candidate substrate for negative symptoms, given that schizophrenia involves dysregulation of the dopamine system. We used the effort-expenditure for rewards task (EEfRT) to assess the degree to which individuals with schizophrenia were willing to exert increased effort for either larger magnitude rewards or for rewards that were more probable. Fifty-nine individuals with schizophrenia and 39 demographically similar controls performed the EEfRT task, which involves making choices between "easy" and "hard" tasks to earn potential rewards. Individuals with schizophrenia showed less of an increase in effort allocation as either reward magnitude or probability increased. In controls, the frequency of choosing the hard task in high reward magnitude and probability conditions was negatively correlated with depression severity and anhedonia. In schizophrenia, fewer hard task choices were associated with more severe negative symptoms and worse community and work function as assessed by a caretaker. Consistent with patterns of disrupted dopamine functioning observed in animal models of schizophrenia, these results suggest that one mechanism contributing to impaired function and motivational drive in schizophrenia may be a reduced allocation of greater effort for higher magnitude or higher probability rewards.
Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards
Ackermann, John F.; Landy, Michael S.
2014-01-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility SU(c) as a function of the criterion c, and the corresponding weighted optimal criterion (wc_opt). Subjects' criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
ERIC Educational Resources Information Center
Arntzen, Erik
2006-01-01
The present series of 4 experiments investigated the probability of responding in accord with equivalence in adult human participants as a function of increasing or decreasing delays in a many-to-one (MTO) or comparison-as-node and one-to-many (OTM) or sample-as-node conditional discrimination procedure. In Experiment 1, 12 participants started…
Stability Criteria Analysis for Landing Craft Utility
2017-12-01
Nomenclature (excerpt): m² square meter; m/s meters per second; m/s² meters per second squared; η vertical displacement of sea water free surface; η₃ ship's heave displacement; η₅ ship's pitch angle; p(ξ) Rayleigh distribution probability function; POSSE Program of Ship Salvage Engineering; pk… spectrum constant; γ JONSWAP wave spectrum peak factor; Γ(λ) gamma probability function; Δ ship's displacement; Δω small frequency.
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
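The core of the generating function technique can be illustrated with a hedged numpy sketch (not the authors' compressed-sensing-accelerated code): evaluate a probability generating function at the roots of unity and invert with an FFT to recover probabilities, checked here against a Poisson offspring distribution whose pmf is known in closed form.

```python
import numpy as np
from math import exp, factorial

def pgf_to_probabilities(pgf, n):
    """Recover P(X = 0..n-1) from a probability generating function by
    evaluating it at the n-th roots of unity and inverting with the FFT."""
    s = np.exp(2j * np.pi * np.arange(n) / n)
    return np.real(np.fft.fft(pgf(s)) / n)

lam = 2.0
poisson_pgf = lambda s: np.exp(lam * (s - 1.0))   # E[s^X] for X ~ Poisson(lam)

probs = pgf_to_probabilities(poisson_pgf, 32)
exact = [exp(-lam) * lam**k / factorial(k) for k in range(5)]
print(probs[:5])   # matches the exact Poisson pmf below
print(exact)
```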
Maximum predictive power and the superposition principle
NASA Technical Reports Server (NTRS)
Summhammer, Johann
1994-01-01
In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
Probability mapping of scarred myocardium using texture and intensity features in CMR images
2013-01-01
Background: The myocardium exhibits a heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of the scarred area in the myocardium. Methods: In this paper, we propose a probability mapping technique using texture and intensity features to describe the heterogeneous nature of the scarred myocardium in CMR images after MI. Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated using a probability function based on Bayes' rule. Any set of features can be used in the probability function. Results: In the present study, we demonstrate the use of two different types of features. One is based on the mean intensity of the pixel and the other on the underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean pixel intensity and the underlying texture information are presented. We hypothesize that the probability mapping of the myocardium offers an alternative visualization, possibly showing details of physiological significance that are difficult to detect visually in the original CMR image. Conclusion: The probability mapping obtained from the two features provides a way to define different cardiac segments, which offer a way to identify areas of the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280
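A minimal single-feature sketch of such a Bayes-rule probability map, assuming Gaussian intensity likelihoods for scarred and non-scarred tissue (the paper also uses texture features; all distribution parameters below are hypothetical).

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def scar_probability_map(image, mu_scar=180.0, sd_scar=25.0,
                         mu_normal=90.0, sd_normal=20.0, prior_scar=0.2):
    """Per-pixel P(scar | intensity) via Bayes' rule.

    Single-feature (mean intensity) version; all distribution parameters
    here are hypothetical placeholders.
    """
    l_scar = gaussian(image, mu_scar, sd_scar) * prior_scar
    l_norm = gaussian(image, mu_normal, sd_normal) * (1.0 - prior_scar)
    return l_scar / (l_scar + l_norm)

image = np.array([[80.0, 120.0], [170.0, 200.0]])   # toy LG-CMR intensities
print(scar_probability_map(image))
```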
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
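For intuition, the exact capital distribution of the capital-dependent game can also be obtained by direct propagation of the probability vector, as in this hedged sketch (the paper works instead via Fourier transforms; the game-B win probabilities below are the standard Parrondo parameters).

```python
import numpy as np

def play_game_b(dist, offset, eps=0.005, rounds=1):
    """Exactly propagate the capital distribution under Parrondo's
    capital-dependent game B: win prob 1/10 - eps if capital % 3 == 0,
    else 3/4 - eps.  dist[i] is P(capital = i - offset)."""
    for _ in range(rounds):
        new = np.zeros(dist.size + 2)
        for i, p in enumerate(dist):
            if p == 0.0:
                continue
            capital = i - offset
            win = 1/10 - eps if capital % 3 == 0 else 3/4 - eps
            new[i + 2] += p * win          # capital increases by 1
            new[i] += p * (1.0 - win)      # capital decreases by 1
        dist, offset = new, offset + 1
    return dist, offset

dist, offset = play_game_b(np.array([1.0]), offset=0, rounds=100)
support = np.arange(dist.size) - offset
print("mean capital after 100 rounds:", np.dot(support, dist))
```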
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process and prior multivariate normal distributions of the parameters of the models are assumed to be specified, as are the prior probabilities of the models being correct. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment is then chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using ultrasound B-mode images, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from fibrotic liver, has been proposed. In this paper, the effect of non-speckle echo signals on the tissue characteristics estimated with the multi-Rayleigh model was evaluated. Non-speckle signals were identified and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated once the non-speckle signals were removed.
Computing exact bundle compliance control charts via probability generating functions.
Chen, Binchao; Matis, Timothy; Benneyan, James
2016-06-01
Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
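A minimal sketch of the PGF approach for one common case: the number of compliant elements in a bundle with independent, non-identical compliance probabilities (a Poisson binomial variable) has a PGF that is a product of per-element factors, so its exact pmf follows from repeated polynomial convolution. The compliance rates below are hypothetical.

```python
import numpy as np

def bundle_compliance_pmf(p):
    """Exact pmf of the number of compliant bundle elements.

    Each element i is compliant with probability p[i]; the pmf of the sum
    is obtained by multiplying the per-element PGFs (1 - p_i) + p_i * s,
    i.e., by repeated polynomial convolution.
    """
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf                     # pmf[k] = P(exactly k elements compliant)

# Hypothetical per-element compliance rates for a 5-element care bundle.
pmf = bundle_compliance_pmf([0.95, 0.90, 0.85, 0.99, 0.92])
print("P(all 5 compliant) =", pmf[-1])
print("P(at least 4)      =", pmf[-2:].sum())
```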
Probabilistic 3D data fusion for multiresolution surface generation
NASA Technical Reports Server (NTRS)
Manduchi, R.; Johnson, A. E.
2002-01-01
In this paper we present an algorithm for adaptive-resolution integration of 3D data collected from multiple distributed sensors. The input to the algorithm is a set of 3D surface points and associated sensor models. Using a probabilistic rule, a surface probability function is generated that represents the probability that a particular volume of space contains the surface. The surface probability function is represented using an octree data structure; regions of space containing samples of large covariance are stored at a coarser level than regions containing samples with smaller covariance. The algorithm outputs an adaptive-resolution surface generated by connecting points that lie on the ridge of surface probability with triangles scaled to match the local discretization of space given by the octree. We present results from 3D data generated by scanning lidar and structure from motion.
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at Δt of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median Δt of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
Economic decision-making compared with an equivalent motor task.
Wu, Shih-Wei; Delgado, Mauricio R; Maloney, Laurence T
2009-04-14
There is considerable evidence that human economic decision-making deviates from the predictions of expected utility theory (EUT) and that human performance conforms to EUT in many perceptual and motor decision tasks. It is possible that these results reflect a real difference in decision-making in the 2 domains but it is also possible that the observed discrepancy simply reflects typical differences in experimental design. We developed a motor task that is mathematically equivalent to choosing between lotteries and used it to compare how the same subject chose between classical economic lotteries and the same lotteries presented in equivalent motor form. In experiment 1, we found that subjects are more risk seeking in deciding between motor lotteries. In experiment 2, we used cumulative prospect theory to model choice and separately estimated the probability weighting functions and the value functions for each subject carrying out each task. We found no patterned differences in how subjects represented outcome value in the motor and the classical tasks. However, the probability weighting functions for motor and classical tasks were markedly and significantly different. Those for the classical task showed a typical tendency to overweight small probabilities and underweight large probabilities, and those for the motor task showed the opposite pattern of probability distortion. This outcome also accounts for the increased risk-seeking observed in the motor tasks of experiment 1. We conclude that the same subject distorts probability, but not value, differently in making identical decisions in motor and classical form.
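For reference, the inverse-S probability weighting function commonly used in cumulative prospect theory is easy to state in code; the sketch below uses the Tversky-Kahneman (1992) one-parameter form with their median gain estimate γ = 0.61 (the study estimated such parameters per subject, possibly with a different parametric family).

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function.

    Inverse-S shaped for gamma < 1: small probabilities are overweighted,
    large ones underweighted.  gamma = 0.61 is their median estimate for
    gains; it is used here purely for illustration.
    """
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:4.2f} -> w(p) = {tk_weight(p):.3f}")
```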
Heyman, Gene M.; Grisanzio, Katherine A.; Liang, Victor
2016-01-01
We tested whether principles that describe the allocation of overt behavior, as in choice experiments, also describe the allocation of cognition, as in attention experiments. Our procedure is a cognitive version of the "two-armed bandit choice procedure." The two-armed bandit procedure has been of interest to psychologists and economists because it tends to support patterns of responding that are suboptimal. Each of two alternatives provides rewards according to fixed probabilities. The optimal solution is to choose the alternative with the higher probability of reward on each trial. However, subjects often allocate responses so that the probability of a response approximates its probability of reward. Although it is this result which has attracted most interest, probability matching is not always observed. As a function of monetary incentives, practice, and individual differences, subjects tend to deviate from probability matching toward exclusive preference, as predicted by maximizing. In our version of the two-armed bandit procedure, the monitor briefly displayed two small adjacent stimuli that predicted correct responses according to fixed probabilities, as in a two-armed bandit procedure. We show that in this setting a simple linear equation describes the relationship between attention and correct responses, and that the equation's solution is the allocation of attention between the two stimuli. The calculations showed that attention allocation varied as a function of the degree to which the stimuli predicted correct responses. Linear regression revealed a strong correlation (r = 0.99) between the predictiveness of a stimulus and the probability of attending to it. Nevertheless there were deviations from probability matching, and although small, they were systematic and statistically significant. As in choice studies, attention allocation deviated toward maximizing as a function of practice, feedback, and incentives. Our approach also predicts the frequency of correct guesses and the relationship between attention allocation and response latencies. The results were consistent with these two predictions, the assumptions of the equations used to calculate attention allocation, and recent studies which show that predictiveness and reward are important determinants of attention. PMID:27014109
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly, without the need for any convergence criterion.
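A hedged sketch of the Tikhonov step on a generic discretized Fredholm problem (the smoothing kernel and test function below are invented stand-ins for the v sin i projection kernel).

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-posed discretized Fredholm problem: b = A @ x_true + noise.
n = 60
t = np.linspace(0.0, 1.0, n)
A = np.exp(-5.0 * np.abs(t[:, None] - t[None, :]))   # toy smoothing kernel
x_true = np.exp(-0.5 * ((t - 0.4) / 0.1) ** 2)
b = A @ x_true + rng.normal(0.0, 1e-3, n)

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

for lam in (1e-8, 1e-4, 1e-1):
    x = tikhonov(A, b, lam)
    print(f"lam = {lam:.0e}: reconstruction error = {np.linalg.norm(x - x_true):.3f}")
```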
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of the respective distributions reveals that in all cases the EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
Garway-Heath, David F
2008-01-01
This chapter reviews the evidence for the clinical application of vision function tests and imaging devices to identify early glaucoma, and sets out a scheme for the appropriate use and interpretation of test results in screening/case-finding and clinic settings. In early glaucoma, signs may be equivocal and the diagnosis is often uncertain. Either structural damage or vision function loss may be the first sign of glaucoma; neither one is consistently apparent before the other. Quantitative tests of visual function and measurements of optic-nerve head and retinal nerve fiber layer anatomy are useful to either raise or lower the probability that glaucoma is present. The posttest probability for glaucoma may be calculated from the pretest probability and the likelihood ratio of the diagnostic criterion, and the output of several diagnostic devices may be combined to achieve a final probability. However, clinicians need to understand how these diagnostic devices make their measurements, so that the validity of each test result can be adequately assessed. Only then should the result be used, together with the patient history and clinical examination, to derive a diagnosis.
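The posttest-probability calculation described here is a one-liner in odds form; the sketch below chains two hypothetical test results (the pretest probability and likelihood ratios are invented, and conditional independence of the tests is assumed).

```python
def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posttest odds = pretest odds x LR."""
    odds = pretest_p / (1.0 - pretest_p) * likelihood_ratio
    return odds / (1.0 + odds)

# Hypothetical numbers: 5% pretest probability of glaucoma, then an imaging
# criterion with LR+ = 10, followed by a visual-field test with LR+ = 4.
p = posttest_probability(0.05, 10.0)
p = posttest_probability(p, 4.0)
print(f"posttest probability = {p:.2f}")
```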
Money, kisses, and electric shocks: on the affective psychology of risk.
Rottenstreich, Y; Hsee, C K
2001-05-01
Prospect theory's S-shaped weighting function is often said to reflect the psychophysics of chance. We propose an affective rather than psychophysical deconstruction of the weighting function resting on two assumptions. First, preferences depend on the affective reactions associated with potential outcomes of a risky choice. Second, even with monetary values controlled, some outcomes are relatively affect-rich and others relatively affect-poor. Although the psychophysical and affective approaches are complementary, the affective approach has one novel implication: Weighting functions will be more S-shaped for lotteries involving affect-rich than affect-poor outcomes. That is, people will be more sensitive to departures from impossibility and certainty but less sensitive to intermediate probability variations for affect-rich outcomes. We corroborated this prediction by observing probability-outcome interactions: An affect-poor prize was preferred over an affect-rich prize under certainty, but the direction of preference reversed under low probability. We suggest that the assumption of probability-outcome independence, adopted by both expected-utility and prospect theory, may hold across outcomes of different monetary values, but not different affective values.
Reinforcement Probability Modulates Temporal Memory Selection and Integration Processes
Matell, Matthew S.; Kurti, Allison N.
2013-01-01
We have previously shown that rats trained in a mixed-interval peak procedure (tone = 4s, light = 12s) respond in a scalar manner at a time in between the trained peak times when presented with the stimulus compound (Swanton & Matell, 2011). In our previous work, the two component cues were reinforced with different probabilities (short = 20%, long = 80%) to equate response rates, and we found that the compound peak time was biased toward the cue with the higher reinforcement probability. Here, we examined the influence that different reinforcement probabilities have on the temporal location and shape of the compound response function. We found that the time of peak responding shifted as a function of the relative reinforcement probability of the component cues, becoming earlier as the relative likelihood of reinforcement associated with the short cue increased. However, as the relative probabilities of the component cues grew dissimilar, the compound peak became non-scalar, suggesting that the temporal control of behavior shifted from a process of integration to one of selection. As our previous work has utilized durations and reinforcement probabilities more discrepant than those used here, these data suggest that the processes underlying the integration/selection decision for time are based on cue value. PMID:23896560
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration to exceed regulatory thresholds. However, these maps are obtained from only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain, and the advantages of the compositional cokriging approach were demonstrated.
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
Exact probability distribution function for the volatility of cumulative production
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Klümper, Andreas
2018-04-01
In this paper we study the volatility and its probability distribution function for cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of production processes.
Strong lensing probability in TeVeS (tensor-vector-scalar) theory
NASA Astrophysics Data System (ADS)
Chen, Da-Ming
2008-01-01
We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ωb = 0.04 and ΩΛ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764) determined from SDSS data release 1 and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745) from GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with Fontana GSMF matches the results from CLASS/JVAS quite well.
The risks and returns of stock investment in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-03-01
The risks and returns of stock investment are discussed by numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Analysis of the effects of delay time and initial position on the risks and returns of stock investment indicates that: (i) there is an optimal delay time matching minimal risk of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite holds for weak EDS; (ii) increasing the initial position reduces the risk of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those in other studies, and good agreement is found.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
The Difference Calculus and the Negative Binomial Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko O.; Shenton, L. R.
2007-01-01
In a previous paper we state the dominant term in the third central moment of the maximum likelihood estimator k̂ of the parameter k in the negative binomial probability function, where the probability generating function is (p + 1 - pt)^(-k). A partial sum of the series Σ 1/(k + x)^3 is involved, where x is a negative binomial random variate. In expectation this sum can only be found numerically using the computer. Here we give a simple definite integral on (0,1) for the generalized case. This means that we now have a valid expression for √β₁₁(k) and √β₁₁(p). In addition we use the finite difference operator Δ, and E = 1 + Δ, to set up formulas for low-order moments. Other examples of the operators are quoted relating to the orthogonal set of polynomials associated with the negative binomial probability function used as a weight function.
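The expectation discussed above, which the abstract notes must be found numerically, is straightforward to approximate by direct summation over the probability mass function implied by the stated generating function. The following Python sketch (not from the paper; the truncation point x_max and the sample parameter values are assumptions) evaluates E[1/(k + X)³] this way:

```python
import numpy as np
from scipy.special import gammaln

def nb_pmf(x, k, p):
    # pmf matching the pgf (p + 1 - p*t)**(-k):
    # P(X = x) = C(k+x-1, x) * (p/(1+p))**x * (1+p)**(-k)
    log_pmf = (gammaln(k + x) - gammaln(k) - gammaln(x + 1)
               + x * np.log(p / (1.0 + p)) - k * np.log1p(p))
    return np.exp(log_pmf)

def expected_inverse_cube(k, p, x_max=20000):
    # Truncated numerical evaluation of E[ 1/(k + X)**3 ]
    x = np.arange(x_max)
    return np.sum(nb_pmf(x, k, p) / (k + x) ** 3)

print(expected_inverse_cube(k=2.0, p=1.5))  # example values, not from the paper
```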
A Seakeeping Performance and Affordability Tradeoff Study for the Coast Guard Offshore Patrol Cutter
2016-06-01
[List-of-figures residue; the recoverable captions are: a seakeeping index polar plot for Sea State 4, with all headings relative to the wave motion and velocity given in meters per second; and probability and cumulative density functions of annual sea state occurrences in the open-ocean North Pacific.] Probability distribution functions are available that describe the likelihood that an operational area will experience a given sea state.
Liu, Tianhui; Fu, Bina; Zhang, Dong H
2017-04-28
The dissociative chemisorption of HCl on the Au(111) surface has recently been an interesting and important subject, regarding the discrepancy between the theoretical dissociation probabilities and the experimental sticking probabilities. We here constructed an accurate full-dimensional (six-dimensional (6D)) potential energy surface (PES) based on the density functional theory (DFT) with the revised Perdew-Burke-Ernzerhof (RPBE) functional, and performed 6D quantum mechanical (QM) calculations for HCl dissociating on a rigid Au(111) surface. The effects of vibrational excitations, rotational orientations, and site-averaging approximation on the present RPBE PES are investigated. Due to the much higher barrier height obtained on the RPBE PES than on the PW91 PES, the agreement between the present theoretical and experimental results is greatly improved. In particular, at the very low kinetic energy, the QM-RPBE dissociation probability agrees well with the experimental data. However, the computed QM-RPBE reaction probabilities are still markedly different from the experimental values at most of the energy regions. In addition, the QM-RPBE results achieve good agreement with the recent ab initio molecular dynamics calculations based on the RPBE functional at high kinetic energies.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly. The second is the construction of the likelihood function: in general it is predefined using a known pdf, whereas here it is constructed in a new and different manner, using the system itself.
Influence of the random walk finite step on the first-passage probability
NASA Astrophysics Data System (ADS)
Klimenkova, Olga; Menshutin, Anton; Shchur, Lev
2018-01-01
A well-known connection between the first-passage probability of a random walk and the distribution of electrical potential described by the Laplace equation is studied. We simulate a random walk in the plane numerically as a discrete-time process with fixed step length. We measure the first-passage probability to touch an absorbing sphere of radius R in 2D. We find a regular deviation of the first-passage probability from the exact function, which we attribute to the finiteness of the random walk step.
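A minimal experiment in the spirit of this abstract compares the first-contact distribution of a fixed-step walk with the continuum (Laplace-equation) prediction, namely the exterior Poisson kernel of the disk. The Python sketch below is illustrative, not the authors' code; the start distance, step length, and the step cap that discards rare very long walks are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
R, d, step = 1.0, 2.0, 0.05   # absorbing radius, start distance, walk step

def first_contact_angle(max_steps=500_000):
    # Fixed-step walk from (d, 0) until the absorbing disk |x| <= R is touched
    pos = np.array([d, 0.0])
    for _ in range(max_steps):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        pos += step * np.array([np.cos(phi), np.sin(phi)])
        if np.hypot(pos[0], pos[1]) <= R:
            return np.arctan2(pos[1], pos[0])
    return None  # walk truncated; discarding these slightly biases the tail

angles = [a for a in (first_contact_angle() for _ in range(2000)) if a is not None]

# Continuum prediction from the Laplace equation: exterior Poisson kernel
theta = np.linspace(-np.pi, np.pi, 361)
exact = (d**2 - R**2) / (2.0 * np.pi * (d**2 - 2.0 * d * R * np.cos(theta) + R**2))
```

A histogram of `angles` approaches `exact` as the step length shrinks; the residual discrepancy at finite step is the deviation the abstract describes.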
ERIC Educational Resources Information Center
Varga, Julia
2006-01-01
This paper analyses students' application strategies to higher education, the effects of labour market expectations and admission probabilities. The starting hypothesis of this study is that students consider the expected utility of their choices, a function of expected net lifetime earnings and the probability of admission. Based on a survey…
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Probability mass first flush evaluation for combined sewer discharges.
Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong
2010-01-01
The Korean government has put a lot of effort into constructing sanitation facilities for controlling non-point source pollution, of which the first flush phenomenon is a prime example. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms for representativeness, using probability density functions of rainfall volumes during the last two years, found all the gauged storms to be valid representative precipitation. Both the observed MFFn and the probability MFFn in BE-1 denoted similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
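For orientation, the mass-first-flush ratio used above is straightforward to compute from paired flow and concentration records. The sketch below is a generic illustration, not the authors' implementation; the variable names and the 20% volume fraction are assumptions:

```python
import numpy as np

def mass_first_flush(q, c, t, frac=0.2):
    """MFF ratio: fraction of total pollutant mass carried by the first
    `frac` of runoff volume, divided by `frac` (q: flow rate, c:
    concentration, t: time stamps of the storm record)."""
    dt = np.gradient(t)
    vol = np.cumsum(q * dt)            # cumulative runoff volume
    mass = np.cumsum(q * c * dt)       # cumulative pollutant mass
    v_norm, m_norm = vol / vol[-1], mass / mass[-1]
    return np.interp(frac, v_norm, m_norm) / frac

# A strong first flush: concentration decays as the storm proceeds
t = np.linspace(0.0, 3600.0, 200)
q = np.full_like(t, 0.5)
c = 100.0 * np.exp(-t / 900.0)
print(mass_first_flush(q, c, t))       # > 1 indicates first-flush behavior
```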
Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A
2010-01-01
Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825
The paradoxical effect of low reward probabilities in suboptimal choice.
Fortes, Inês; Pinto, Carlos; Machado, Armando; Vasconcelos, Marco
2018-04-01
When offered a choice between 2 alternatives, animals sometimes prefer the option yielding less food. For instance, pigeons and starlings prefer an option that on 20% of the trials presents a stimulus always followed by food, and on the remaining 80% of the trials presents a stimulus never followed by food (the Informative Option), over an option that provides food on 50% of the trials regardless of the stimulus presented (the Noninformative Option). To explain this suboptimal behavior, it has been hypothesized that animals ignore (or do not engage with) the stimulus that is never followed by food in the Informative Option. To assess when pigeons attend to the stimulus usually not followed by food, we increased the probability of reinforcement, p, in the presence of that stimulus. Across 2 experiments, we found that the value of the Informative Option decreased with p. To account for the results, we added to the Reinforcement Rate Model (and also to the Hyperbolic Discounting Model) an engagement function, f(p), that specified the likelihood the animal attends to a stimulus followed by reward with probability p, and then derived the model predictions for 2 forms of f(p), a linear function, and an all-or-none threshold function. Both models predicted the observed findings with a linear engagement function: The higher the probability of reinforcement after a stimulus, the higher the probability of engaging the stimulus, and, surprisingly, the less the value of the option comprising the stimulus. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems, and conditions under which it fails, will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
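As background for the method-of-moments step, the maximum entropy density subject to moment constraints can be obtained by minimizing the convex dual over the Lagrange multipliers. The Python sketch below is a generic illustration on a bounded grid; the support, grid resolution, and target moments are assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 401)          # assumed bounded support
dx = x[1] - x[0]
mu = np.array([0.5, 0.3])               # target moments E[x], E[x^2] (example)
phi = np.vstack([x, x**2])              # moment functions

def dual(lam):
    # Convex dual of the MaxEnt problem: log Z(lam) - lam . mu;
    # its minimizer makes the density reproduce the target moments
    logz = np.log(np.sum(np.exp(lam @ phi)) * dx)
    return logz - lam @ mu

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
p = np.exp(lam @ phi)
p /= np.sum(p) * dx                      # MaxEnt density matching the moments
```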
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four dimensional probability distribution function was validated by comparing it to the two dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in a javascript object notation format.
Reward and uncertainty in exploration programs
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1971-01-01
A set of variables which are crucial to the economic outcome of petroleum exploration is discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for these variables, extreme and probably unrealistic assumptions are made; in particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost, are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
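The trial-and-histogram procedure described above is simple to reproduce. The Python sketch below follows the same pattern under stated assumptions (binomial drilling successes, independent lognormal reservoir sizes, and illustrative cost and price figures; none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_wells = 10_000, 20
p_success = 0.15                    # assumed per-well success probability
well_cost, price = 2.0e6, 18.0      # $ per well and $ per barrel (illustrative)

net_return = np.empty(n_trials)
for i in range(n_trials):
    successes = rng.binomial(n_wells, p_success)
    # Independent lognormal reservoir sizes, per the independence assumption
    barrels = rng.lognormal(mean=13.0, sigma=1.2, size=successes)
    net_return[i] = price * barrels.sum() - well_cost * n_wells

# Histogram approximating the pdf of net economic return
pdf, edges = np.histogram(net_return, bins=50, density=True)
```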
An extended car-following model considering random safety distance with different probabilities
NASA Astrophysics Data System (ADS)
Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi
2018-02-01
Because of differences in vehicle type or driving skill, the driving strategy is not exactly the same for all drivers, and the driving speeds of different vehicles may differ for the same headway. Since the optimal velocity function is determined by the safety distance, besides the maximum velocity and the headway, an extended car-following model accounting for random safety distances with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function, and cases with multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
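To make the setup concrete, the sketch below simulates a standard optimal-velocity car-following model on a ring road in which each vehicle's safety distance is drawn from two values with fixed probabilities. This is a minimal illustration of the idea, not the authors' model; the tanh form of the optimal velocity function and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, a, vmax, dt = 50, 500.0, 1.0, 2.0, 0.1   # cars, ring length, sensitivity
safety = rng.choice([2.0, 4.0], size=N, p=[0.3, 0.7])  # random safety distances

def V(h, sc):
    # Optimal velocity as a function of headway h and safety distance sc
    return 0.5 * vmax * (np.tanh(h - sc) + np.tanh(sc))

x = np.sort(rng.uniform(0.0, L, N))    # positions along the ring (unwrapped)
v = np.full(N, 0.5 * vmax)
for _ in range(5000):
    headway = np.roll(x, -1) - x
    headway[-1] += L                   # leader of the last car is the first car
    v += a * (V(headway, safety) - v) * dt
    v = np.clip(v, 0.0, None)          # no backward motion
    x += v * dt

# Stop-and-go waves show up as strongly non-uniform headways
print(headway.min(), headway.max())
```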
Theory of Stochastic Laplacian Growth
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2017-07-01
We generalize the diffusion-limited aggregation by issuing many randomly-walking particles, which stick to a cluster at the discrete time unit providing its growth. Using simple combinatorial arguments we determine probabilities of different growth scenarios and prove that the most probable evolution is governed by the deterministic Laplacian growth equation. A potential-theoretical analysis of the growth probabilities reveals connections with the tau-function of the integrable dispersionless limit of the two-dimensional Toda hierarchy, normal matrix ensembles, and the two-dimensional Dyson gas confined in a non-uniform magnetic field. We introduce the time-dependent Hamiltonian, which generates transitions between different classes of equivalence of closed curves, and prove the Hamiltonian structure of the interface dynamics. Finally, we propose a relation between probabilities of growth scenarios and the semi-classical limit of certain correlation functions of "light" exponential operators in the Liouville conformal field theory on a pseudosphere.
NASA Astrophysics Data System (ADS)
Nioutsikou, Elena; Partridge, Mike; Bedford, James L.; Webb, Steve
2005-03-01
The aim of this study has been to explicitly include the functional heterogeneity of an organ as a factor that contributes to the probability of complication of normal tissues following radiotherapy. Situations for which the inclusion of this information can be advantageous to the design of treatment plans are then investigated. A Java program has been implemented for this purpose. This makes use of a voxelated model of a patient, which is based on registered anatomical and functional data in order to enable functional voxel weighting. Using this model, the functional dose-volume histogram (fDVH) and the functional normal tissue complication probability (fNTCP) are then introduced as extensions to the conventional dose-volume histogram (DVH) and normal tissue complication probability (NTCP). In the presence of functional heterogeneity, these tools are physically more meaningful for plan evaluation than the traditional indices, as they incorporate additional information and are anticipated to show a better correlation with outcome. New parameters mf, nf and TD50f are required to replace the m, n and TD50 parameters. A range of plausible values was investigated, awaiting fitting of these new parameters to patient outcomes where functional data have been measured. As an example, the model is applied to two lung datasets utilizing accurately registered computed tomography (CT) and single photon emission computed tomography (SPECT) perfusion scans. Assuming a linear perfusion-function relationship, the biological index mean perfusion weighted lung dose (MPWLD) has been extracted from integration over outlined regions of interest. In agreement with the MPWLD ranking, the fNTCP predictions reveal that incorporation of functional imaging in radiotherapy treatment planning is most beneficial for organs with a large volume effect and large focal areas of dysfunction. There is, however, no additional advantage in cases presenting with homogeneous function. Although presented for lung radiotherapy, this model is general. It can also be applied to positron emission tomography (PET)-CT or functional magnetic resonance imaging (fMRI)-CT registered data and extended to the functional description of tumour control probability.
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
One approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our data. The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a model of laser beam propagation in the maritime domain.
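As a reference point for the kernel density method mentioned above, the following self-contained Python sketch implements a Gaussian KDE with Silverman's rule-of-thumb bandwidth. It is a generic illustration; the stand-in data and the bandwidth rule are assumptions, not the report's choices:

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth=None):
    """Gaussian kernel density estimate evaluated on `grid`; Silverman's
    rule of thumb is used when no bandwidth is supplied."""
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-0.2)
    u = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

data = np.random.default_rng(7).normal(0.0, 1.0, 500)  # stand-in for beam data
grid = np.linspace(-4.0, 4.0, 200)
pdf = gaussian_kde_pdf(data, grid)
```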
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
NASA Astrophysics Data System (ADS)
Tian, F.; Lu, Y.
2017-12-01
Based on socioeconomic and hydrological data in three arid inland basins and on error analysis, the dynamics of human water consumption (HWC) are shown to be asymmetric: HWC increases rapidly in wet periods but holds steady or decreases slightly in dry periods. Beyond the qualitative explanation (in wet periods abundant water availability spurs rapid growth of HWC, while in dry periods the now-expanded economy is sustained through over-exploitation), two quantitative models are established and tested, based on expected utility theory (EUT) and prospect theory (PT) respectively. EUT states that humans make decisions based on the total expected utility, namely the sum of the utility function multiplied by the probability of each result, while PT states that the utility function is defined over gains and losses separately, and that probability should be replaced by a probability weighting function.
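The contrast between the two decision rules is easy to state in code. The sketch below evaluates a two-outcome gamble under EUT and under a simple (non-cumulative) PT form with the Tversky-Kahneman weighting function; the power utility and value functions, the parameter values, and the example outcomes are illustrative assumptions, not the models fitted in the paper:

```python
import numpy as np

def weight(p, gamma=0.61):
    # Tversky-Kahneman probability weighting function (illustrative gamma)
    return p**gamma / (p**gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def eut_value(outcomes, probs):
    # Expected utility: objective probabilities times a concave utility
    return np.sum(probs * np.sqrt(outcomes))

def pt_value(outcomes, probs, alpha=0.88, gamma=0.61):
    # Prospect-theory value for gains: v(x) weighted by w(p)
    return np.sum(weight(probs, gamma) * outcomes**alpha)

x = np.array([0.0, 100.0])     # water-benefit outcomes (illustrative units)
p = np.array([0.7, 0.3])
print(eut_value(x, p), pt_value(x, p))
```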
Electrical cable utilization for wave energy converters
Bull, Diana; Baca, Michael; Schenkman, Benjamin
2018-04-27
Here, this paper investigates the suitability of sizing the electrical export cable based on the rating of the contributing WECs within a farm. These investigations have produced a new methodology to evaluate the probabilities associated with peak power values on an annual basis. It has been shown that the peaks in pneumatic power production will follow an exponential probability function for a linear model. A methodology to combine all the individual probability functions into an annual view has been demonstrated on pneumatic power production by a Backward Bent Duct Buoy (BBDB). These investigations have also resulted in a highly simplified and perfunctory model of installed cable cost as a function of voltage and conductor cross-section. This work solidifies the need to determine electrical export cable rating based on expected energy delivery as opposed to device rating, as small decreases in energy delivery can result in cost savings.
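One way to combine per-sea-state exponential peak distributions into an annual view, in the spirit of the methodology described above, is an occurrence-weighted mixture of exceedance probabilities. The sketch below is a hypothetical illustration; the occurrence fractions, exponential scales, and the 1% exceedance target are assumptions, not BBDB data:

```python
import numpy as np
from scipy.optimize import brentq

# Annual occurrence fraction and fitted exponential scale of power peaks
# for a handful of sea states (all values illustrative)
occurrence = np.array([0.35, 0.30, 0.20, 0.15])
scale_kw = np.array([50.0, 120.0, 260.0, 480.0])

def annual_peak_exceedance(x_kw):
    # P(peak power > x) over a year: occurrence-weighted mixture of the
    # per-sea-state exponential peak distributions
    return np.sum(occurrence * np.exp(-x_kw / scale_kw))

# Cable rating exceeded by only 1% of peaks, found by root bracketing
rating = brentq(lambda x: annual_peak_exceedance(x) - 0.01, 1.0, 1.0e4)
print(rating)
```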
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
Method and device for landing aircraft dependent on runway occupancy time
NASA Technical Reports Server (NTRS)
Ghalebsaz Jeddi, Babak (Inventor)
2012-01-01
A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.
Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.
Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo
2017-10-01
This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate an action probability distribution based on his/her private information so as to maximize his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrison, Glenn Charles
1999-12-01
In this dissertation, results are presented of laboratory investigations and mathematical modeling efforts designed to better understand the interactions of ozone with surfaces. In the laboratory, carpet and duct materials were exposed to ozone, and ozone uptake kinetics and the ozone-induced emissions of volatile organic compounds were measured. To understand the results of the experiments, mathematical methods were developed to describe dynamic indoor aldehyde concentrations, mass transport of reactive species to smooth surfaces, the equivalent reaction probability of whole carpet due to the surface reactivity of fibers and carpet backing, and ozone aging of surfaces. Carpets, separated carpet fibers, and separated carpet backing all tended to release aldehydes when exposed to ozone. Secondary emissions were mostly n-nonanal and several other smaller aldehydes. The pattern of emissions suggested that vegetable oils may be precursors for these oxidized emissions. Several possible precursors are discussed, along with experiments in which linseed and tung oils were tested for their secondary emission potential. Dynamic emission rates of 2-nonenal from a residential carpet may indicate that intermediate species in the oxidation of conjugated olefins can significantly delay aldehyde emissions and act as a reservoir for these compounds. The ozone-induced emission rate of 2-nonenal, a very odorous compound, can result in odorous indoor concentrations for several years. Surface ozone reactivity, a key parameter in determining the flux of ozone to a surface, is parameterized by the reaction probability, which is simply the probability that an ozone molecule will be irreversibly consumed when it strikes a surface. In laboratory studies of two residential and two commercial carpets, the ozone reaction probabilities for carpet fibers and carpet backing, and the equivalent reaction probability for whole carpet, were determined. Typical reaction probability values for these materials were 10⁻⁷, 10⁻⁵, and 10⁻⁵, respectively. To understand how internal surface area influences the equivalent reaction probability of whole carpet, a model of ozone diffusion into, and reaction with, internal carpet components was developed and used to predict apparent reaction probabilities for carpet. Combined with a modified model of turbulent mass transfer developed by Liu et al., this predicts deposition rates and indoor ozone concentrations. The model predicts that carpet should have an equivalent reaction probability of about 10⁻⁵, matching the laboratory measurements. For both carpet and duct materials, surfaces become progressively quenched (aging), losing the ability to react with or otherwise take up ozone. The functional form of aging was evaluated, and the reaction probability was found to follow a power function of the cumulative uptake of ozone. To understand ozone aging of surfaces, several mathematical descriptions of aging based on two different mechanisms were developed. The observed functional form of aging is mimicked by a model that describes ozone diffusion with internal reaction in a solid. The fleecy nature of carpet materials, in combination with this model of ozone diffusion below a fiber surface and internal reaction, may explain the functional form and the magnitude of the power-function parameters observed for ozone interactions with carpet.
The ozone-induced aldehyde emissions measured from duct materials were combined with an indoor air quality model to show that indoor aldehyde concentrations may approach odorous levels. Ducts are unlikely to be a significant sink for ozone, owing to the low reaction probability in combination with the short residence time of air in ducts.
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
NASA Technical Reports Server (NTRS)
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is through using stratified probability approximations where the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.
1991-01-01
The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the input of source term probability distributions from the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor is determined, and the functional relationship among all the factors is established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
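The Monte Carlo combination and CCDF construction described above follow a standard pattern. The Python sketch below reproduces it under stated assumptions (lognormal stand-ins for the factor distributions and a multiplicative functional relationship; none of these are the FSAR inputs):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative stand-ins for the factor distributions
source_term = rng.lognormal(mean=-2.0, sigma=1.0, size=n)  # released activity
dispersion = rng.lognormal(mean=0.0, sigma=0.5, size=n)    # atmospheric factor
dose_factor = rng.lognormal(mean=-1.0, sigma=0.3, size=n)  # effects per unit dose

# Assumed multiplicative functional relationship among the factors
consequences = source_term * dispersion * dose_factor

# Complementary cumulative distribution function: P(C >= c)
c = np.sort(consequences)
ccdf = 1.0 - np.arange(n) / n
```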
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
NASA Technical Reports Server (NTRS)
Bollenbacher, Gary; Guptill, James D.
1999-01-01
This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time, and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed, and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
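A common concrete form of this computation integrates the 2D Gaussian of the relative position in the encounter plane over a disk whose radius is the combined hard-body radius, centred at the nominal miss vector. The sketch below is a generic numerical version of that integral, not the report's closed-form solution; the miss vector, covariance, and combined radius are illustrative values:

```python
import numpy as np
from scipy import integrate

def collision_probability(miss, cov, r_comb):
    """Probability mass of a 2D Gaussian (mean `miss`, covariance `cov`)
    inside the combined hard-body disk of radius `r_comb` at the origin."""
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

    def pdf(y, x):
        d = np.array([x, y]) - miss
        return norm * np.exp(-0.5 * d @ inv @ d)

    val, _ = integrate.dblquad(
        pdf, -r_comb, r_comb,
        lambda x: -np.sqrt(r_comb**2 - x**2),
        lambda x: np.sqrt(r_comb**2 - x**2))
    return val

p = collision_probability(miss=np.array([200.0, 50.0]),     # metres
                          cov=np.diag([120.0**2, 80.0**2]),
                          r_comb=15.0)
print(p)
```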
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Alternative probability theories for cognitive psychology.
Narens, Louis
2014-01-01
Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Shih-Jung
Dynamic strength of the High Flux Isotope Reactor (HFIR) vessel to resist hypothetical accidents is analyzed by using the method of fracture mechanics. Vessel critical stresses are estimated by applying dynamic pressure pulses of a range of magnitudes and pulse durations. The pulse-versus-time functions are assumed to be step functions. The probability of vessel fracture is then calculated by assuming a distribution of possible surface cracks of different crack depths. The probability distribution function for the crack depths is based on the form that is recommended by the Marshall report. The toughness of the vessel steel used in the analysis is based on the projected and embrittled value after 10 effective full power years from 1986. From the study made by Cheverton, Merkle and Nanstad, the weakest point on the vessel for fracture evaluation is known to be located within the region surrounding the tangential beam tube HB3. The increase in the probability of fracture is obtained as an extension of the result from that report for the regular operating condition to include conditions of higher dynamic pressures due to accident loadings. The increase in the probability of vessel fracture is plotted for a range of hoop stresses to indicate the vessel strength against hypothetical accident conditions.
Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor
2015-09-21
The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to an absorbing boundary at the origin of coordinates. Such a model can represent the kinetics of diffusion-controlled reactions of charged molecules or the escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish 1. the regular regime (power-law decay), 2. the marginal regime (power law times a slow function of time), and 3. the regime of enhanced absorption (decay faster than the power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.
Evidence of scaling of void probability in nucleus-nucleus interactions at few GeV energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Dipak; Biswas, Biswanath; Deb, Argha
1997-11-01
The rapidity gap probability in the ²⁴Mg-AgBr interaction at 4.5 GeV/c per nucleon has been studied in detail. The data reveal scaling behavior of the void probability in the central rapidity domain, which confirms the validity of the linked-pair approximation for the N-particle cumulant correlation functions. This scaling behavior appears to be similar to the void probability in the Perseus-Pisces supercluster region of galaxies.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection
NASA Astrophysics Data System (ADS)
Denuit, Michel; Dhaene, Jan
2007-06-01
In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, the value-at-risk and conditional tail expectation of the company's total yearly payout are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
Multi-Scale/Multi-Functional Probabilistic Composite Fatigue
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2008-01-01
A multi-level (multi-scale/multi-functional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10⁹ fatigue cycles with a probability of 0.9967.
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
E-O Sensor Signal Recognition Simulation: Computer Code SPOT I.
1978-10-01
[Garbled extraction of the code's input-format tables and flowchart figure (Fig. 12, flowchart for the SPOT computer code). Recoverable details: PDCO(N,I) is the average probability for the aerosol single-scattering phase matrix definition, given for each defined scattering angle (currently a maximum of sixty-four); WLAM(N) is the wavelength, in microns, at which each phase function set is defined; NPROB is the problem number.]
DISCOUNTING OF DELAYED AND PROBABILISTIC LOSSES OVER A WIDE RANGE OF AMOUNTS
Green, Leonard; Myerson, Joel; Oliveira, Luís; Chang, Seo Eun
2014-01-01
The present study examined delay and probability discounting of hypothetical monetary losses over a wide range of amounts (from $20 to $500,000) in order to determine how amount affects the parameters of the hyperboloid discounting function. In separate conditions, college students chose between immediate payments and larger, delayed payments and between certain payments and larger, probabilistic payments. The hyperboloid function accurately described both types of discounting, and amount of loss had little or no systematic effect on the degree of discounting. Importantly, the amount of loss also had little systematic effect on either the rate parameter or the exponent of the delay and probability discounting functions. The finding that the parameters of the hyperboloid function remain relatively constant across a wide range of amounts of delayed and probabilistic loss stands in contrast to the robust amount effects observed with delayed and probabilistic rewards. At the individual level, the degree to which delayed losses were discounted was uncorrelated with the degree to which probabilistic losses were discounted, and delay and probability loaded on two separate factors, similar to what is observed with delayed and probabilistic rewards. Taken together, these findings argue that although delay and probability discounting involve fundamentally different decision-making mechanisms, nevertheless the discounting of delayed and probabilistic losses share an insensitivity to amount that distinguishes it from the discounting of delayed and probabilistic gains.
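For context, the hyperboloid discounting function used in this literature has the form V = A / (1 + kX)^s, where X is the delay or the odds against receiving the outcome; a minimal fitting sketch with invented indifference-point data (not the study's data) might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the hyperboloid V = A / (1 + k*X)**s to invented indifference points
# for a $1000 delayed loss; k is the rate parameter and s the exponent that
# the study reports as insensitive to amount.
def hyperboloid(delay, k, s):
    return 1000.0 / (1.0 + k * delay) ** s

delays = np.array([1.0, 7.0, 30.0, 90.0, 365.0, 1825.0])       # days (hypothetical)
values = np.array([950.0, 900.0, 800.0, 650.0, 450.0, 250.0])  # hypothetical

(k, s), _ = curve_fit(hyperboloid, delays, values, p0=[0.01, 1.0])
print(f"rate parameter k = {k:.4f}, exponent s = {s:.3f}")
```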
Strong lensing probability in TeVeS (tensor-vector-scalar) theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Daming, E-mail: cdm@bao.ac.cn
2008-01-15
We recalculate the strong lensing probability as a function of the image separation in TeVeS (tensor-vector-scalar) cosmology, which is a relativistic version of MOND (MOdified Newtonian Dynamics). The lens is modeled by the Hernquist profile. We assume an open cosmology with Ω_b = 0.04 and Ω_Λ = 0.5 and three different kinds of interpolating functions. Two different galaxy stellar mass functions (GSMF) are adopted: PHJ (Panter, Heavens and Jimenez 2004 Mon. Not. R. Astron. Soc. 355 764) determined from SDSS data release 1 and Fontana (Fontana et al 2006 Astron. Astrophys. 459 745) from the GOODS-MUSIC catalog. We compare our results with both the predicted probabilities for lenses from singular isothermal sphere galaxy halos in LCDM (Lambda cold dark matter) with a Schechter-fit velocity function, and the observational results for the well-defined combined sample of the Cosmic Lens All-Sky Survey (CLASS) and the Jodrell Bank/Very Large Array Astrometric Survey (JVAS). It turns out that the interpolating function μ(x) = x/(1+x) combined with the Fontana GSMF matches the results from CLASS/JVAS quite well.
Structural Features of Algebraic Quantum Notations
ERIC Educational Resources Information Center
Gire, Elizabeth; Price, Edward
2015-01-01
The formalism of quantum mechanics includes a rich collection of representations for describing quantum systems, including functions, graphs, matrices, histograms of probabilities, and Dirac notation. The varied features of these representations affect how computations are performed. For example, identifying probabilities of measurement outcomes…
Symmetric polynomials in information theory: Entropy and subentropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jozsa, Richard; Mitchison, Graeme
2015-06-15
Entropy and other fundamental quantities of information theory are customarily expressed and manipulated as functions of probabilities. Here we study the entropy H and subentropy Q as functions of the elementary symmetric polynomials in the probabilities and reveal a series of remarkable properties. Derivatives of all orders are shown to satisfy a complete monotonicity property. H and Q themselves become multivariate Bernstein functions and we derive the density functions of their Lévy-Khintchine representations. We also show that H and Q are Pick functions in each symmetric polynomial variable separately. Furthermore, we see that H and the intrinsically quantum informational quantity Q become surprisingly closely related in functional form, suggesting a special significance for the symmetric polynomials in quantum information theory. Using the symmetric polynomials, we also derive a series of further properties of H and Q.
Local regularity for time-dependent tug-of-war games with varying probabilities
NASA Astrophysics Data System (ADS)
Parviainen, Mikko; Ruosteenoja, Eero
2016-07-01
We study local regularity properties of value functions of time-dependent tug-of-war games. For games with constant probabilities we get local Lipschitz continuity. For more general games with probabilities depending on space and time we obtain Hölder and Harnack estimates. The games have a connection to the normalized p(x, t)-parabolic equation u_t = Δu + (p(x, t) - 2) Δ_∞^N u.
Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions
2009-03-01
[Garbled extraction of the thesis front matter and list of symbols. Recoverable details: the standard disclaimer naming the United States Air Force, Department of Defense, and United States Government; report number AFIT/GE/ENG/09-23; D is the random variable governing the distribution of dither values, with probability density function p_D(t); the work quantifies the potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.]
Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner
2009-06-01
Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
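The contrast between uniform and hot-spot deposition can be illustrated with a toy Poisson calculation: if hits per cell nucleus are Poisson-distributed with mean m proportional to the local dose, the multiple-hit probability is 1 - P(0) - P(1). The means below are illustrative, not values from the study:

```python
from scipy.stats import poisson

# If alpha-particle hits per cell nucleus are Poisson with mean m (proportional
# to the local dose), the multiple-hit probability is 1 - P(0) - P(1).
def multiple_hit_probability(m):
    return 1.0 - poisson.pmf(0, m) - poisson.pmf(1, m)

for label, m in [("uniform activity", 0.05), ("hot spot", 5.0)]:  # illustrative means
    print(f"{label}: mean hits = {m}, P(multiple hits) = {multiple_hit_probability(m):.4f}")
```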
Spencer, Amy V; Cox, Angela; Lin, Wei-Yu; Easton, Douglas F; Michailidou, Kyriaki; Walters, Kevin
2016-04-01
There is a large amount of functional genetic data available, which can be used to inform fine-mapping association studies (in diseases with well-characterised disease pathways). Single nucleotide polymorphism (SNP) prioritization via Bayes factors is attractive because prior information can inform the effect size or the prior probability of causal association. This approach requires the specification of the effect size. If the information needed to estimate, a priori, the probability density for the effect sizes of causal SNPs in a genomic region is inconsistent or unavailable, then specifying a prior variance for the effect sizes is challenging. We propose both an empirical method to estimate this prior variance, and a coherent approach to using SNP-level functional data, to inform the prior probability of causal association. Through simulation we show that when ranking SNPs by our empirical Bayes factor in a fine-mapping study, the causal SNP rank is generally as high as or higher than the rank using Bayes factors with other plausible values of the prior variance. Importantly, we also show that assigning SNP-specific prior probabilities of association based on expert prior functional knowledge of the disease mechanism can lead to improved causal SNP ranks compared with ranking under identical prior probabilities of association. We demonstrate the use of our methods by applying them to the fine mapping of the CASP8 region of chromosome 2 using genotype data from the Collaborative Oncological Gene-Environment Study (COGS) Consortium. The data we analysed included approximately 46,000 breast cancer case and 43,000 healthy control samples.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
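A minimal ensemble sketch of the idea, assuming a scalar state observed directly: after a standard perturbed-observation EnKF analysis step, members are projected onto the constraint interval, so the members that pile up exactly at the bound act as a discrete stand-in for the delta mass described above. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, obs, obs_var = 1000, 0.01, 0.02**2
prior = rng.normal(0.08, 0.06, n_ens)        # unconstrained Gaussian prior ensemble

# Perturbed-observation EnKF analysis step; the state is observed directly (H = 1).
gain = prior.var() / (prior.var() + obs_var)
analysis = prior + gain * (obs + rng.normal(0.0, np.sqrt(obs_var), n_ens) - prior)

constrained = np.clip(analysis, 0.0, 1.0)    # projection onto the allowed interval
print("P(concentration exactly 0):", np.mean(constrained == 0.0))
print("posterior mean:", constrained.mean())
```

Unlike equal redistribution of the truncated mass, this representation assigns a genuinely nonzero probability to the boundary value itself, which is the point the abstract emphasizes.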
Computer Simulation Results for the Two-Point Probability Function of Composite Media
NASA Astrophysics Data System (ADS)
Smith, P.; Torquato, S.
1988-05-01
Computer simulation results are reported for the two-point matrix probability function S2 of two-phase random media composed of disks distributed with an arbitrary degree of impenetrability λ. The novel technique employed to sample S2(r) (which gives the probability of finding the endpoints of a line segment of length r in the matrix) is very accurate and has a fast execution time. Results for the limiting cases λ = 0 (fully penetrable disks) and λ = 1 (hard disks), respectively, compare very favorably with theoretical predictions made by Torquato and Beasley and by Torquato and Lado. Results are also reported for several values of λ that lie between these two extremes: cases which heretofore have not been examined.
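A rough Monte Carlo sketch of sampling S2(r) (simpler than the fast technique the paper describes): place hard disks by random sequential addition in a periodic unit box as a stand-in for λ = 1, then throw random segments of length r and count how often both endpoints fall in the matrix. The packing parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
radius, n_disks = 0.03, 60          # arbitrary packing parameters

# Random sequential addition of hard (non-overlapping) disks in a periodic box.
centers = []
while len(centers) < n_disks:
    c = rng.random(2)
    d = np.array(centers) if centers else np.empty((0, 2))
    sep = (d - c + 0.5) % 1.0 - 0.5                  # periodic separations
    if d.size == 0 or np.all(np.hypot(sep[:, 0], sep[:, 1]) > 2 * radius):
        centers.append(c)
centers = np.array(centers)

def in_matrix(pts):
    """True where points fall outside every disk (i.e., in the matrix phase)."""
    sep = (centers[None, :, :] - pts[:, None, :] + 0.5) % 1.0 - 0.5
    return np.all(np.hypot(sep[..., 0], sep[..., 1]) > radius, axis=1)

def s2(r, n_samples=20_000):
    p1 = rng.random((n_samples, 2))
    theta = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    p2 = (p1 + r * np.column_stack([np.cos(theta), np.sin(theta)])) % 1.0
    return np.mean(in_matrix(p1) & in_matrix(p2))

for r in [0.0, 0.02, 0.05, 0.1]:
    print(f"S2({r}) ~ {s2(r):.4f}")   # S2(0) equals the matrix area fraction
```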
NASA Technical Reports Server (NTRS)
Guberman, S.; Dalgarno, A.; Posen, A.; Kwok, T. L.
1986-01-01
Multiconfiguration variational calculations of the electronic wave functions of the a³Σg⁺ and b³Σu⁺ states of molecular hydrogen are presented, and the electric dipole transition moment between them (of interest in connection with stellar atmospheres and the UV spectrum of the Jovian planets) is obtained. The dipole moment is used to calculate the probabilities of radiative transitions from the discrete vibrational levels of the a³Σg⁺ state to the vibrational continuum of the repulsive b³Σu⁺ state as functions of the wavelength of the emitted photons. The total transition probabilities and radiative lifetimes of the levels v′ = 0-20 are presented.
Li, Wen-Chang; Cooke, Tom; Sautois, Bart; Soffe, Stephen R; Borisyuk, Roman; Roberts, Alan
2007-09-10
How specific are the synaptic connections formed as neuronal networks develop and can simple rules account for the formation of functioning circuits? These questions are assessed in the spinal circuits controlling swimming in hatchling frog tadpoles. This is possible because detailed information is now available on the identity and synaptic connections of the main types of neuron. The probabilities of synapses between 7 types of identified spinal neuron were measured directly by making electrical recordings from 500 pairs of neurons. For the same neuron types, the dorso-ventral distributions of axons and dendrites were measured and then used to calculate the probabilities that axons would encounter particular dendrites and so potentially form synaptic connections. Surprisingly, synapses were found between all types of neuron but contact probabilities could be predicted simply by the anatomical overlap of their axons and dendrites. These results suggested that synapse formation may not require axons to recognise specific, correct dendrites. To test the plausibility of simpler hypotheses, we first made computational models that were able to generate longitudinal axon growth paths and reproduce the axon distribution patterns and synaptic contact probabilities found in the spinal cord. To test if probabilistic rules could produce functioning spinal networks, we then made realistic computational models of spinal cord neurons, giving them established cell-specific properties and connecting them into networks using the contact probabilities we had determined. A majority of these networks produced robust swimming activity. Simple factors such as morphogen gradients controlling dorso-ventral soma, dendrite and axon positions may sufficiently constrain the synaptic connections made between different types of neuron as the spinal cord first develops and allow functional networks to form. Our analysis implies that detailed cellular recognition between spinal neuron types may not be necessary for the reliable formation of functional networks to generate early behaviour like swimming.
Serrano-Alarcón, Manuel; Perelman, Julian
2017-10-03
In a context of population ageing, it is a priority for planning and prevention to understand the socioeconomic (SE) patterning of functional limitations and its consequences on healthcare needs. This paper aims at measuring the gender and SE inequalities in functional limitations and their age of onset among the Southern European elderly; then, we evaluate how functional status is linked to formal and informal care use. We used Portuguese, Italian and Spanish data from the Survey of Health, Ageing and Retirement in Europe (SHARE) of 2011 (n = 9233). We constructed a summary functional limitation score as the sum of two variables: i) Activities of Daily Living (ADL) and ii) Instrumental Activities of Daily Living (IADL). We modelled functional limitation as a function of age, gender, education, subjective poverty, employment and marital status using multinomial logit models. We then estimated how functional limitation affected informal and formal care demand using negative binomial and logistic models. Women were 2.3 percentage points (pp) more likely to experience severe functional limitation than men, and crossed a 10% probability threshold of suffering from severe limitation around 5 years earlier. Subjective poverty was associated with a 3.1 pp higher probability of severe functional limitation. Having a university degree reduced the probability of severe functional limitation by 3.5 pp compared with having no education. Discrepancies were wider for the oldest old: women aged 65-79 years old were 3.3 pp more likely to suffer severe limitations, the excess risk increasing to 15.5 pp among those older than 80. Similarly, educational inequalities in functional limitation were wider at older ages. Being severely limited was related to a 32.1 pp higher probability of receiving any informal care, as compared with those moderately limited. Finally, those severely limited had on average 3.2 hospitalization days and 4.6 doctor consultations more, per year, than those without limitations. Functional limitations are unequally distributed, hitting women and the worse-off earlier and more severely, with consequences on care needs. Considering the burden on healthcare systems and families, public health policies should seek to reduce current inequalities in functional limitations.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
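The underlying observation is easy to reproduce: samples of a sinusoid taken over at least a half cycle follow the arcsine density 1/(π sqrt(A² - x²)), which a histogram of the samples approximates. A small sketch (our parameters, not the proposal's):

```python
import numpy as np

# Samples of A*cos(t) over a half cycle have the arcsine probability density
# 1 / (pi * sqrt(A**2 - x**2)); a histogram of the samples approximates it.
amplitude, n = 1.0, 100_000
samples = amplitude * np.cos(np.linspace(0.0, np.pi, n))

hist, edges = np.histogram(samples, bins=50, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
analytic = 1.0 / (np.pi * np.sqrt(amplitude**2 - mid**2))
# Compare away from the diverging endpoints of the density.
print("max interior |histogram - arcsine PDF|:", np.abs(hist[2:-2] - analytic[2:-2]).max())
```

Shifting, clipping, or otherwise reshaping the carrier changes this histogram, which is the property a PDF-based modulation scheme can exploit.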
Piecewise Geometric Estimation of a Survival Function.
1985-04-01
[Garbled extraction from the report body. Recoverable details: the estimator builds on Langberg (1982); a by-product of the estimation process is an estimate of the failure rate function; the underlying probability space is envisaged as the infinite product probability space constructed in the usual way from the sequence of probability spaces; the worked example uses leukemia remission-time data for patients who received 6-MP (a mercaptopurine used in the treatment of leukemia), with ordered remission times in weeks of 6, 6, 6, 6+, 7, 9+, 10, 10+, 11+, 13, 16.]
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.
1980-01-01
Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data were compared with some limited field data. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
A probabilistic approach to photovoltaic generator performance prediction
NASA Astrophysics Data System (ADS)
Khallat, M. A.; Rahman, S.
1986-09-01
A method for predicting the performance of a photovoltaic (PV) generator based on long term climatological data and expected cell performance is described. The equations for cell model formulation are provided. Use of the statistical model for characterizing the insolation level is discussed. The insolation data is fitted to appropriate probability distribution functions (Weibull, beta, normal). The probability distribution functions are utilized to evaluate the capacity factors of PV panels or arrays. An example is presented revealing the applicability of the procedure.
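A compact sketch of this workflow with synthetic data, assuming a Weibull fit and a simple linear power curve saturating at a rated insolation (the panel model and all numbers are placeholders):

```python
import numpy as np
from scipy import stats

# Fit an insolation sample to a Weibull distribution and push it through a
# simple linear PV power model to get an expected capacity factor.
rng = np.random.default_rng(3)
insolation = rng.weibull(2.2, 5000) * 600.0        # synthetic hourly W/m^2

shape, loc, scale = stats.weibull_min.fit(insolation, floc=0.0)
dist = stats.weibull_min(shape, loc, scale)

rated = 1000.0                                     # W/m^2 at which output is rated
capacity_factor = dist.expect(lambda g: min(g / rated, 1.0))
print(f"fitted shape={shape:.2f}, scale={scale:.0f}, capacity factor={capacity_factor:.3f}")
```

The same pattern applies with beta or normal fits; one would compare goodness of fit across the candidate families before computing the capacity factor.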
Application of a New Resource-Constrained Triage Method to Military-Age Victims
2009-12-01
[Garbled extraction of the report. Recoverable details: the existing triage method is criticized as not evidence based, not resource aware, and "scientifically and operationally flawed" (attributed to General P. K. Carlton, former USAF Surgeon General); SCORE is a metric used to predict survival probability, with P̂(t) the survival probability of victims with original SCORE s treated in time period t; logistic-function coefficients were derived on a design set and validated on a test set, the logistic function having the form P̂ = 1/(1 + e^(-s)).]
Nonadditive entropies yield probability distributions with biases not warranted by the data.
Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A
2013-11-01
Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that probability distributions inferred satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies that violate the Shore and Johnson axioms, including nonadditive entropies such as the Tsallis entropy, violate this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
A collision probability analysis of the double-heterogeneity problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hebert, A.
1993-10-01
A practical collision probability model is presented for the description of geometries with many levels of heterogeneity. Regular regions of the macrogeometry are assumed to contain a stochastic mixture of spherical grains or cylindrical tubes. Simple expressions for the collision probabilities in the global geometry are obtained as a function of the collision probabilities in the macro- and microgeometries. This model was successfully implemented in the collision probability kernel of the APOLLO-1, APOLLO-2, and DRAGON lattice codes for the description of a broad range of reactor physics problems. Resonance self-shielding and depletion calculations in the microgeometries are possible because each microregion is explicitly represented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, O.; Roa, Luis; Delgado, A.
We study the probabilistic cloning of equidistant states. These states are such that the inner product between them is a complex constant or its conjugate. Thereby, it is possible to study their cloning in a simple way. In particular, we are interested in the behavior of the cloning probability as a function of the phase of the overlap among the involved states. We show that for certain families of equidistant states Duan and Guo's cloning machine leads to cloning probabilities lower than the optimal unambiguous discrimination probability of equidistant states. We propose an alternative cloning machine whose cloning probability is higher than or equal to the optimal unambiguous discrimination probability for any family of equidistant states. Both machines achieve the same probability for equidistant states whose inner product is a positive real number.
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
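A minimal sketch of a prediction-error update of this kind (the paper's exact rule may differ): after each observed jump s → s', move row s of the estimated transition matrix toward the one-hot encoding of the outcome. Note the update preserves row normalization:

```python
import numpy as np

rng = np.random.default_rng(4)

true_T = np.array([[0.9, 0.1],
                   [0.3, 0.7]])        # ground-truth transition matrix
est_T = np.full((2, 2), 0.5)           # uninformed initial estimate
eta, s = 0.01, 0                       # learning rate, initial state

for _ in range(20_000):
    s_next = rng.choice(2, p=true_T[s])
    target = np.eye(2)[s_next]
    est_T[s] += eta * (target - est_T[s])   # state prediction-error update
    s = s_next

print(np.round(est_T, 2))              # approaches true_T; rows still sum to 1
```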
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, for a better understanding of the theory. In this research, we used Bayesian theory for detecting hepatitis disease and displaying the result of the diagnosis process. Bayes' theorem, rediscovered and refined by Laplace, rests on a basic idea: use the known prior probability and conditional probability density parameters to calculate, via Bayes' theorem, the corresponding posterior probability, and then use the posterior probability to infer and make decisions. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever, and headache; the diagnostic task is to compute the probability of hepatitis given the presence of malaise, fever, and headache. The result revealed that Bayesian theory successfully identified the existence of hepatitis disease.
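Under an independence assumption on the symptoms, the posterior computation the abstract describes reduces to a naive-Bayes update; every number in this sketch (prevalence, symptom likelihoods) is invented for illustration:

```python
import math

# All numbers below (prior prevalence, symptom likelihoods) are invented.
prior = {"hepatitis": 0.05, "no_hepatitis": 0.95}
likelihood = {                        # P(symptom | class), symptoms independent
    "hepatitis":    {"malaise": 0.80, "fever": 0.70, "headache": 0.60},
    "no_hepatitis": {"malaise": 0.20, "fever": 0.10, "headache": 0.30},
}
observed = ["malaise", "fever", "headache"]

unnormalized = {c: prior[c] * math.prod(likelihood[c][s] for s in observed)
                for c in prior}
total = sum(unnormalized.values())
posterior = {c: v / total for c, v in unnormalized.items()}
print(posterior)                      # P(hepatitis | malaise, fever, headache)
```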
Espeland, Mark A; Newman, Anne B; Sink, Kaycee; Gill, Thomas M; King, Abby C; Miller, Michael E; Guralnik, Jack; Katula, Jeff; Church, Timothy; Manini, Todd; Reid, Kieran F; McDermott, Mary M
2015-08-01
The objective of this study was to evaluate cross-sectional and longitudinal associations between ankle-brachial index (ABI) and indicators of cognitive function. Randomized clinical trial (Lifestyle Interventions and Independence for Elders Trial). Eight US academic centers. A total of 1601 adults ages 70-89 years, sedentary, without dementia, and with functional limitations. Baseline ABI and interviewer- and computer-administered cognitive function assessments were obtained. These assessments were used to compare a physical activity intervention with a health education control. Cognitive function was reassessed 24 months later (interviewer-administered) and 18 or 30 months later (computer-administered), and central adjudication was used to classify individuals as having mild cognitive impairment, probable dementia, or neither. Lower ABI had a modest independent association with poorer cognitive functioning at baseline (partial r = 0.09; P < .001). Although lower baseline ABI was not associated with overall changes in cognitive function test scores, it was associated with higher odds for 2-year progression to a composite of either mild cognitive impairment or probable dementia (odds ratio 2.60 per unit lower ABI; 95% confidence interval 1.06-6.37). Across 2 years, changes in ABI were not associated with changes in cognitive function. In an older cohort of sedentary individuals without dementia and with functional limitations, lower baseline ABI was independently correlated with cognitive function and associated with greater 2-year risk for progression to mild cognitive impairment or probable dementia.
Synchronization Analysis of Master-Slave Probabilistic Boolean Networks.
Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W C; Cao, Jinde
2015-08-28
In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each holding with a certain probability at each discrete time point. We first define the synchronization of master-slave PBNs with probability one, and then we investigate synchronization with probability one. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results.
Stochastic and deterministic model of microbial heat inactivation.
Corradini, Maria G; Normand, Mark D; Peleg, Micha
2010-03-01
Microbial inactivation is described by a model based on the changing survival probabilities of individual cells or spores. It is presented in a stochastic and discrete form for small groups, and as a continuous deterministic model for larger populations. If the underlying mortality probability function remains constant throughout the treatment, the model generates first-order ("log-linear") inactivation kinetics. Otherwise, it produces survival patterns that include Weibullian ("power-law") with upward or downward concavity, tailing with a residual survival level, complete elimination, flat "shoulder" with linear or curvilinear continuation, and sigmoid curves. In both forms, the same algorithm or model equation applies to isothermal and dynamic heat treatments alike. Constructing the model does not require assuming a kinetic order or knowledge of the inactivation mechanism. The general features of its underlying mortality probability function can be deduced from the experimental survival curve's shape. Once identified, the function's coefficients, the survival parameters, can be estimated directly from the experimental survival ratios by regression. The model is testable in principle but matching the estimated mortality or inactivation probabilities with those of the actual cells or spores can be a technical challenge. The model is not intended to replace current models to calculate sterility. Its main value, apart from connecting the various inactivation patterns to underlying probabilities at the cellular level, might be in simulating the irregular survival patterns of small groups of cells and spores. In principle, it can also be used for nonthermal methods of microbial inactivation and their combination with heat.
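A sketch of the stochastic, discrete form of such a model: each cell independently survives a time step with probability 1 - p(t). A constant mortality probability gives log-linear survival, while a time-increasing one bends the survival curve downward; both p(t) choices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(p_of_t, n_cells=200, n_steps=60):
    """Survivor counts for a small group; each cell dies at step t with prob p(t)."""
    alive = np.ones(n_cells, dtype=bool)
    counts = [n_cells]
    for t in range(n_steps):
        alive &= rng.random(n_cells) >= p_of_t(t)
        counts.append(alive.sum())
    return np.array(counts)

constant = simulate(lambda t: 0.08)            # log-linear (first-order) pattern
rising = simulate(lambda t: 0.02 + 0.003 * t)  # downward-concave (Weibullian) pattern
print("constant-p survivors:", constant[::15])
print("rising-p survivors:  ", rising[::15])
```

With a small group like this, repeated runs show the irregular, run-to-run variability the abstract highlights; averaging many runs recovers the smooth deterministic curves.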
Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif
2014-11-01
Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, a logistic model based on standard dosimetric parameters (LM), and a logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m=0.17, and TD50=72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan
2017-12-01
This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with the amplify-and-forward relaying scheme. The RF channel undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF), and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit error rate (BER) based on Meijer's G-function. The evaluation and simulation are provided for the system performance, and the aperture averaging effect is discussed as well.
Relative contributions of three descriptive methods: implications for behavioral assessment.
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods-the ABC method, the conditional probability method, and the conditional and background probability method-to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
NASA Astrophysics Data System (ADS)
Qian, Shang-Wu; Gu, Zhi-Yu
2001-12-01
Using the Feynman path integral with topological constraints arising from the presence of one singular line, we find the homotopic probability distribution P_L^n for the winding number n and the partition function P_L of the entangled system around a ribbon segment chain. We find that when the width of the ribbon segment chain 2a increases, the partition function decreases exponentially, whereas the free energy increases by an amount proportional to the square of the width. When the width tends to zero we obtain the same results as those of a single chain with one singular point.
NASA Technical Reports Server (NTRS)
Wilson, Lonnie A.
1987-01-01
Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Bragg-cell receiver characteristics are fully characterized for simple RF emitter signals. This receiver is early in its development cycle when compared to the IFM receiver. Functional mathematical models are derived and presented in this report for the Bragg-cell receiver. Theoretical analysis is presented and digital computer signal processing results are given for the Bragg-cell receiver. Probability density function analyses are performed for the output frequency. The probability density functions are observed to depart from the assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Knot probability of polygons subjected to a force: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Janse van Rensburg, E. J.; Orlandini, E.; Tesi, M. C.; Whittington, S. G.
2008-01-01
We use Monte Carlo methods to study the knot probability of lattice polygons on the cubic lattice in the presence of an external force f. The force is coupled to the span of the polygons along a lattice direction, say the z-direction. If the force is negative polygons are squeezed (the compressive regime), while positive forces tend to stretch the polygons along the z-direction (the tensile regime). For sufficiently large positive forces we verify that the Pincus scaling law in the force-extension curve holds. At a fixed number of edges n the knot probability is a decreasing function of the force. For a fixed force the knot probability approaches unity as 1 - exp(-α0(f)n + o(n)), where α0(f) is positive and a decreasing function of f. We also examine the average of the absolute value of the writhe and we verify the square root growth law (known for f = 0) for all values of f.
Interpretations of Probability in Quantum Mechanics: A Case of "Experimental Metaphysics"
NASA Astrophysics Data System (ADS)
Hellman, Geoffrey
After reviewing paradigmatic cases of "experimental metaphysics" basing inferences against local realism and determinism on experimental tests of Bell's theorem (and successors), we concentrate on clarifying the meaning and status of "objective probability" in quantum mechanics. The terms "objective" and "subjective" are found ambiguous and inadequate, masking crucial differences turning on the question of what the numerical values of probability functions measure vs. the question of the nature of the "events" on which such functions are defined. This leads naturally to a 2×2 matrix of types of interpretations, which are then illustrated with salient examples. (Of independent interest are the splitting of the "Copenhagen interpretation" into "objective" and "subjective" varieties in one of the dimensions and the splitting of Bohmian hidden variables from (other) modal interpretations along that same dimension.) It is then explained why Everett interpretations are difficult to categorize in these terms. Finally, we argue that Bohmian mechanics does not seriously threaten the experimental-metaphysical case for ultimate randomness and purely physical probabilities.
Advanced reliability methods for structural evaluation
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y.-T.
1985-01-01
Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
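To make the comparison concrete, a textbook limit state g = R - S with normal capacity R and load S admits both a first-order, second-moment estimate and a crude Monte Carlo estimate; this sketch (with invented moments) stands in for the FPI-versus-Monte-Carlo trade-off, and for this linear normal case the first-order result is exact:

```python
import numpy as np
from scipy import stats

# Failure when g = R - S < 0; the moments below are illustrative only.
mu_R, sd_R, mu_S, sd_S = 10.0, 1.0, 6.0, 1.2

# First-order, second-moment: reliability index beta and failure probability.
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
p_fosm = stats.norm.sf(beta)

# Crude Monte Carlo for comparison; note the sample size needed near p ~ 1e-3.
rng = np.random.default_rng(6)
n = 2_000_000
p_mc = np.mean(rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n) < 0.0)
print(f"beta = {beta:.2f}, FOSM p_f = {p_fosm:.2e}, Monte Carlo p_f = {p_mc:.2e}")
```

The Monte Carlo estimate needs on the order of 100/p samples for a usable answer, which is the cost gap the abstract quantifies; for nonlinear limit states the first-order answer becomes approximate rather than exact.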
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
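For the Onemax case, the offspring fitness distribution can be written down directly: starting from a parent with k ones out of n bits, fitness becomes (k - X) + Y with X ~ Bin(k, p) and Y ~ Bin(n - k, p), so each probability is a polynomial in p. A small numerical check:

```python
import numpy as np
from scipy import stats

# Exact offspring-fitness PMF under uniform bit-flip mutation for Onemax:
# X ones flip to zero, Y zeros flip to one, new fitness = (k - X) + Y.
def onemax_fitness_pmf(n, k, p):
    x = stats.binom.pmf(np.arange(k + 1), k, p)          # ones flipped to zero
    y = stats.binom.pmf(np.arange(n - k + 1), n - k, p)  # zeros flipped to one
    pmf = np.zeros(n + 1)
    for i, px in enumerate(x):
        for j, py in enumerate(y):
            pmf[k - i + j] += px * py
    return pmf

pmf = onemax_fitness_pmf(n=20, k=15, p=0.05)
print("E[fitness] =", np.dot(np.arange(21), pmf))   # = k(1-p) + (n-k)p = 14.5
```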
USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.
Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J
2018-03-15
The most interesting events in a Radiological Monitoring Network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which are then no longer Gaussian. Within this work, the probability of recording a dose more than 2 standard deviations above the mean is proposed as a surveillance statistic for higher doses. This probability is estimated by using Hermite polynomials to reconstruct the PDF. The result is that the probability is ~6 ± 1%, well above the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
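A sketch of this kind of Hermite-polynomial (Gram-Charlier A series) reconstruction on synthetic right-skewed data, standing in for H*(10) records; the gamma sample and integration grid are our choices:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(7)
x = rng.gamma(6.0, 1.0, 50_000)          # synthetic right-skewed "dose" records

z = (x - x.mean()) / x.std()             # standardized records
skew, exkurt = stats.skew(z), stats.kurtosis(z)   # excess kurtosis

def gram_charlier_pdf(t):
    # phi(t) * [1 + (skew/6)*He3(t) + (exkurt/24)*He4(t)], probabilists' Hermite
    return stats.norm.pdf(t) * hermeval(t, [1.0, 0.0, 0.0, skew / 6.0, exkurt / 24.0])

grid = np.linspace(2.0, 10.0, 4001)
p_tail = trapezoid(gram_charlier_pdf(grid), grid)
print(f"Gram-Charlier P(> 2 sd) = {p_tail:.3f} (Gaussian value: 0.023)")
print(f"empirical     P(> 2 sd) = {(z > 2).mean():.3f}")
```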
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservativism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2016-12-01
A point source on a plane constantly emits particles which rapidly diffuse and then stick to a growing cluster. The growth probability of a cluster is presented as a sum over all possible scenarios leading to the same final shape. The classical point for the action, defined as minus the logarithm of the growth probability, describes the most probable scenario and reproduces the Laplacian growth equation, which embraces numerous fundamental free boundary dynamics in nonequilibrium physics. For nonclassical scenarios we introduce virtual point sources, in whose presence the action becomes the Kullback-Leibler entropy. Strikingly, this entropy is shown to be the sum of electrostatic energies of layers grown per elementary time unit. Hence the growth probability of the presented nonequilibrium process obeys the Gibbs-Boltzmann statistics, which, as a rule, is not applied out of equilibrium. Each layer's probability is expressed as a product of simple factors in an auxiliary complex plane after a properly chosen conformal map. The action in this plane is a sum of Robin functions, which solve the Liouville equation. At the end we establish connections of our theory with the τ function of the integrable Toda hierarchy and with the Liouville theory for noncritical quantum strings.
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is the simplest special case of the Weibull family. In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the estimation of the point, interval, hazard function, and reliability values. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
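As a sketch of the single-risk building block under a noninformative prior: with exponential lifetimes, a Jeffreys prior p(λ) ∝ 1/λ, n observed failures, and total time on test T, the posterior for the rate is Gamma(n, T). The failure times below are invented; a competing-risks model combines such terms across independent causes:

```python
import numpy as np
from scipy import stats

failures = np.array([55.0, 120.0, 200.0, 310.0, 400.0])  # hypothetical hours
n, T = len(failures), failures.sum()

posterior = stats.gamma(a=n, scale=1.0 / T)   # Gamma(n, T) posterior for the rate
lam_hat = posterior.mean()
lo, hi = posterior.ppf([0.025, 0.975])        # 95% credible interval

t = 100.0
# Posterior-predictive reliability E[exp(-lambda*t)]; analytically (T/(T+t))**n.
reliability = posterior.expect(lambda lam: np.exp(-lam * t))
print(f"rate {lam_hat:.4g}/h, 95% CI ({lo:.4g}, {hi:.4g}), R({t:.0f} h) = {reliability:.3f}")
```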
Borras, Ester; Chang, Kyle; Pande, Mala; Cuddy, Amanda; Bosch, Jennifer L; Bannon, Sarah A; Mork, Maureen E; Rodriguez-Bigas, Miguel A; Taggart, Melissa W; Lynch, Patrick M; You, Y Nancy; Vilar, Eduardo
2017-10-01
Lynch syndrome (LS) is a genetic condition secondary to germline alterations in the DNA mismatch repair (MMR) genes, with 30% of changes being variants of uncertain significance (VUS). Our aim was to perform an in silico reclassification of VUS from a large single-institution cohort to help prioritize functional validation. A total of 54 VUS were detected, 33 (61%) of them novel variants. We integrated family history, pathology, and genetic information along with supporting evidence from eight different in silico tools at the RNA and protein level. Our assessment allowed us to reclassify 54% (29/54) of the VUS as probably damaging, 13% (7/54) as possibly damaging, and 28% (15/54) as probably neutral. There are more than 1,000 VUS reported in MMR genes, and our approach facilitates the prioritization of further functional efforts to assess the pathogenicity of those classified as probably damaging. Cancer Prev Res; 10(10); 580-7. ©2017 American Association for Cancer Research.
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
NASA Astrophysics Data System (ADS)
Hüsami Afşar, Mehdi; Unal Şorman, Ali; Tugrul Yilmaz, Mustafa
2016-04-01
Different drought characteristics (e.g., duration, average severity, and average areal extent) often have a monotonic relation: an increase in the magnitude of one is typically followed by a similar increase in the magnitude of another. Hence it is viable to establish a relationship between different drought characteristics with the goal of predicting one from the others. Copula functions, which relate different variables through their joint and conditional cumulative probability distributions, are often used to statistically model drought characteristics. In this study, bivariate and trivariate joint probabilities of these characteristics are obtained over Ankara (Turkey) between 1960 and 2013. Copula-based return period estimation shows that joint probabilities of duration, average severity, and average areal extent can be satisfactorily achieved. Among the copula families investigated in this study, the elliptical family (i.e., the normal and t-student copula functions) resulted in the lowest root mean square error. This study was supported by TUBITAK fund #114Y676.
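A minimal sketch of the copula workflow with a Gaussian (normal) copula, the elliptical family the study found best; the drought series below are synthetic stand-ins, not the Ankara data:

```python
import numpy as np
from scipy.stats import norm, rankdata

# Synthetic stand-ins for drought duration and average severity series.
rng = np.random.default_rng(0)
duration = rng.gamma(2.0, 3.0, 200)
severity = 0.6 * duration + rng.gamma(1.5, 1.0, 200)

# Gaussian copula: map each margin to uniforms via ranks, then to normal
# scores, and estimate the correlation of the scores.
u = rankdata(duration) / (len(duration) + 1)
v = rankdata(severity) / (len(severity) + 1)
rho = np.corrcoef(norm.ppf(u), norm.ppf(v))[0, 1]

# Joint exceedance probability P(D > d, S > s) by copula simulation;
# the joint return period follows as 1 / P.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], 100_000)
uu, vv = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])
p_joint = np.mean((uu > 0.9) & (vv > 0.9))    # both in their top decile
print(rho, p_joint, 1.0 / p_joint)
```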
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even though they may not be. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.
Are there common mathematical structures in economics and physics?
NASA Astrophysics Data System (ADS)
Mimkes, Jürgen
2016-12-01
Economics is a field that looks into the future. We may know a few things ahead (ex ante), but most things we only know afterwards (ex post). How can we work in a field where much of the important information is missing? Mathematics gives two answers: 1. Probability theory leads to microeconomics: the Lagrange function optimizes utility under constraints of economic terms (like costs). The utility function is the entropy, the logarithm of probability. The optimal result is given by a probability distribution and an integrating factor. 2. Calculus leads to macroeconomics: in economics we have two production factors, capital and labour. This requires two-dimensional calculus with exact and non-exact differentials, which represent the "ex ante" and "ex post" terms of economics. An integrating factor turns a non-exact term (like income) into an exact term (entropy, the natural production function). The integrating factor is the same as in microeconomics and turns the non-exact field of economics into an exact physical science.
Using harmonic oscillators to determine the spot size of Hermite-Gaussian laser beams
NASA Technical Reports Server (NTRS)
Steely, Sidney L.
1993-01-01
The similarity of the functional forms of quantum mechanical harmonic oscillators and the modes of Hermite-Gaussian laser beams is illustrated. This functional similarity provides a direct correlation to investigate the spot size of large-order mode Hermite-Gaussian laser beams. The classical limits of a corresponding two-dimensional harmonic oscillator provide a definition of the spot size of Hermite-Gaussian laser beams. The classical limits of the harmonic oscillator provide integration limits for the photon probability densities of the laser beam modes to determine the fraction of photons detected therein. Mathematica is used to integrate the probability densities for large-order beam modes and to illustrate the functional similarities. The probabilities of detecting photons within the classical limits of Hermite-Gaussian laser beams asymptotically approach unity in the limit of large-order modes, in agreement with the Correspondence Principle. The classical limits for large-order modes include all of the nodes for Hermite-Gaussian laser beams; Sturm's theorem provides a direct proof.
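The abstract's calculation was done in Mathematica; an equivalent numerical check is possible in Python using the dimensionless oscillator density and classical turning points x_c = sqrt(2n + 1). The mode orders below are illustrative:

```python
from math import exp, lgamma, log, pi, sqrt
from scipy.integrate import quad
from scipy.special import eval_hermite

def prob_within_classical_limits(n):
    """Fraction of photons (probability) inside the classical turning
    points x_c = sqrt(2n + 1) for the n-th Hermite-Gaussian mode."""
    # log of the normalization 1 / (2^n n! sqrt(pi)), kept in logs so the
    # factor stays finite for large n
    log_norm = -(n * log(2.0) + lgamma(n + 1) + 0.5 * log(pi))
    density = lambda x: exp(log_norm - x * x) * eval_hermite(n, x) ** 2
    xc = sqrt(2 * n + 1)
    p, _ = quad(density, -xc, xc, limit=200)
    return p

# The fraction approaches unity as the mode order grows (Correspondence
# Principle); n = 0 recovers the familiar erf(1) ~ 0.843.
for n in (0, 1, 5, 20, 50):
    print(n, round(prob_within_classical_limits(n), 4))
```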
An Application of Probability to Combinatorics: A Proof of Vandermonde Identity
ERIC Educational Resources Information Center
Paolillo, Bonaventura; Rizzo, Piermichele; Vincenzi, Giovanni
2017-01-01
In this paper, we give possible suggestions for a classroom lesson about an application of probability using basic mathematical notions. We approach some combinatorial results without using "induction", "polynomial identities" or "generating functions", and give a proof of the "Vandermonde…
14 CFR 23.1309 - Equipment, systems, and installations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... compliance with this section with regard to the electrical power system and to equipment design and... the system must be able to supply the following power loads in probable operating combinations and for probable durations: (1) Loads connected to the power distribution system with the system functioning...
Possibilities of forecasting hypercholesterinemia in pilots
NASA Technical Reports Server (NTRS)
Vivilov, P.
1980-01-01
The dependence of the frequency of hypercholesterinemia on the age, average annual flying time, functional category, qualification class, and flying specialty of 300 pilots was investigated. The risk probability coefficient of hypercholesterinemia was computed. An evaluation table was developed which gives an 84% probability of forecasting risk of hypercholesterinemia.
Occupation times and ergodicity breaking in biased continuous time random walks
NASA Astrophysics Data System (ADS)
Bel, Golan; Barkai, Eli
2005-12-01
Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of the Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
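A simulation sketch of the non-ergodic occupation-time statistics described above, using a two-site walk with Pareto-type sojourn times of infinite mean (the exponent and horizon are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def occupation_fraction(alpha, t_max=1e4):
    """Fraction of time spent on site A for a two-site walk whose sojourn
    times are heavy tailed, t = u**(-1/alpha) (infinite mean for alpha < 1)."""
    t, t_on_a, on_a = 0.0, 0.0, True
    while t < t_max:
        dwell = rng.random() ** (-1.0 / alpha)   # Pareto-type sojourn time
        dwell = min(dwell, t_max - t)
        if on_a:
            t_on_a += dwell
        t += dwell
        on_a = not on_a
    return t_on_a / t_max

fractions = [occupation_fraction(0.5) for _ in range(2000)]
hist, _ = np.histogram(fractions, bins=10, range=(0, 1), density=True)
print(np.round(hist, 2))   # U shape: mass piles up near 0 and 1 (arcsine-like)
```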
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)
NASA Astrophysics Data System (ADS)
Tomas, A.; Menendez, M.; Mendez, F. J.; Coco, G.; Losada, I. J.
2012-04-01
In the last decades, freak or rogue waves have become an important topic in engineering and science. Forecasting the occurrence probability of freak waves is a challenge for oceanographers, engineers, physicists and statisticians. There are several mechanisms responsible for the formation of freak waves, and different theoretical formulations (primarily based on numerical models with simplifying assumptions) have been proposed to predict the occurrence probability of freak waves in a sea state as a function of N (number of individual waves) and kurtosis (k). On the other hand, different attempts to parameterize k as a function of spectral parameters such as the Benjamin-Feir Index (BFI) and the directional spreading (Mori et al., 2011) have been proposed. The objective of this work is twofold: (1) develop a statistical model to describe the uncertainty of the maximum individual wave height, Hmax, considering N and k as covariates; (2) obtain a predictive formulation to estimate k as a function of aggregated sea state spectral parameters. For both purposes, we use free surface measurements (more than 300,000 20-minute sea states) from the Spanish deep water buoy network (Puertos del Estado, Spanish Ministry of Public Works). Non-stationary extreme value models are nowadays widely used to analyze the time-dependent or directional-dependent behavior of extreme values of geophysical variables such as significant wave height (Izaguirre et al., 2010). In this work, a Generalized Extreme Value (GEV) statistical model for the dimensionless maximum wave height (x = Hmax/Hs) in every sea state is used to assess the probability of freak waves. We allow the location, scale and shape parameters of the GEV distribution to vary as functions of k and N. The kurtosis dependency is parameterized using third-order polynomials and the model is fitted using standard log-likelihood theory, yielding very good predictions of the occurrence probability of freak waves (x > 2). Regarding the second objective of this work, we apply different algorithms using three spectral parameters (wave steepness, directional dispersion, frequency dispersion) as predictors to estimate the probability density function of the kurtosis for a given sea state. ACKNOWLEDGMENTS The authors thank Puertos del Estado (Spanish Ministry of Public Works) for providing the free surface measurement database.
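A stationary-GEV sketch of the core fitting step with scipy (the study's model is non-stationary, with parameters varying in k and N); the sample below is synthetic, standing in for the buoy records:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for the dimensionless maximum wave height x = Hmax/Hs
# across many sea states.
rng = np.random.default_rng(2)
x = 1.0 + 0.35 * rng.gumbel(size=5000)        # hypothetical sample

# Stationary GEV fit; note scipy's genextreme shape c is the negative of
# the usual GEV shape parameter convention.
c, loc, scale = genextreme.fit(x)
p_freak = genextreme.sf(2.0, c, loc=loc, scale=scale)
print(c, loc, scale, p_freak)                 # P(x > 2): freak-wave probability
```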
Guidugli, Lucia; Shimelis, Hermela; Masica, David L; Pankratz, Vernon S; Lipton, Gary B; Singh, Namit; Hu, Chunling; Monteiro, Alvaro N A; Lindor, Noralane M; Goldgar, David E; Karchin, Rachel; Iversen, Edwin S; Couch, Fergus J
2018-01-17
Many variants of uncertain significance (VUS) have been identified in BRCA2 through clinical genetic testing. VUS pose a significant clinical challenge because the contribution of these variants to cancer risk has not been determined. We conducted a comprehensive assessment of VUS in the BRCA2 C-terminal DNA binding domain (DBD) by using a validated functional assay of BRCA2 homologous recombination (HR) DNA-repair activity and defined a classifier of variant pathogenicity. Among 139 variants evaluated, 54 had ≥99% probability of pathogenicity, and 73 had ≥95% probability of neutrality. Functional assay results were compared with predictions of variant pathogenicity from the Align-GVGD protein-sequence-based prediction algorithm, which has been used for variant classification. Relative to the HR assay, Align-GVGD significantly (p < 0.05) over-predicted pathogenic variants. We subsequently combined functional and Align-GVGD prediction results in a Bayesian hierarchical model (VarCall) to estimate the overall probability of pathogenicity for each VUS. In addition, to predict the effects of all other BRCA2 DBD variants and to prioritize variants for functional studies, we used the endoPhenotype-Optimized Sequence Ensemble (ePOSE) algorithm to train classifiers for BRCA2 variants by using data from the HR functional assay. Together, the results show that systematic functional assays in combination with in silico predictors of pathogenicity provide robust tools for clinical annotation of BRCA2 VUS. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Distribution Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
14 CFR 23.1309 - Equipment, systems, and installations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... chapter and that requires a power supply is an “essential load” on the power supply. The power sources and the system must be able to supply the following power loads in probable operating combinations and for probable durations: (1) Loads connected to the power distribution system with the system functioning...
Probability Distributions of Minkowski Distances between Discrete Random Variables.
ERIC Educational Resources Information Center
Schroger, Erich; And Others
1993-01-01
Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. This paper shows how to compute the probability function, the expectation, and the variance for Minkowski distances and the special cases city-block distance and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)
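An exact-enumeration sketch for the city-block special case, computing the distance's probability function, expectation, and variance from a hypothetical common marginal pmf (all values below are illustrative):

```python
import numpy as np
from itertools import product

# Exact probability function of the city-block (L1) distance between two
# independent discrete random vectors sharing a common marginal pmf.
values = np.array([0, 1, 2, 3])
pmf = np.array([0.1, 0.4, 0.3, 0.2])          # hypothetical marginal
dims = 2                                       # N-dimensional space

dist_pmf = {}
for x in product(range(len(values)), repeat=dims):
    for y in product(range(len(values)), repeat=dims):
        d = int(sum(abs(values[i] - values[j]) for i, j in zip(x, y)))
        p = np.prod(pmf[list(x)]) * np.prod(pmf[list(y)])
        dist_pmf[d] = dist_pmf.get(d, 0.0) + p

ds = sorted(dist_pmf)
mean = sum(d * dist_pmf[d] for d in ds)
var = sum(d**2 * dist_pmf[d] for d in ds) - mean**2
print({d: round(dist_pmf[d], 4) for d in ds}, mean, var)
```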
Large Deviations: Advanced Probability for Undergrads
ERIC Educational Resources Information Center
Rolls, David A.
2007-01-01
In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2014 CFR
2014-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2011 CFR
2011-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
NASA Astrophysics Data System (ADS)
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate probability, effectively by simulation. For planetary protection, it may be used to estimate the probability of impact P_I of a protected planet by a launch vehicle (upper stage). The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood (1936, "Fiduciary limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the chi-squared function, actually its inverse (the integral chi-squared function would yield probability α as a function of the mean μ and an actual value n). This formula for the upper and lower limits of the mean μ with the two-tailed probability 1-α depends on the LOC α and an estimated value of the number of "successes" n. In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single-tailed distribution. (A smaller actual P_I is no problem.) One advantage of this method is that this function is available in EXCEL. Note that care must be taken with the definition of the CHIINV function (the inverse of the integral chi-squared distribution). The equivalent inequality in EXCEL is μ < CHIINV[1-α, 2(n+1)]. In practice, one calculates this upper limit for a specified LOC α and a guess of how many hits n will be found after the MC analysis. Then the estimate of the number of histories required is this upper limit divided by the specification for the allowed P_I (rounded up). However, if the number of hits actually exceeds the guess, the P_I requirement will be met only with a smaller LOC. A disadvantage is that the intervals about the mean are "in general too wide, yielding coverage probabilities much greater than 1-α" (G. Casella and C. Robert (1988), Purdue University Technical Report #88-7 or Cornell University Technical Report BU-903-M). For planetary protection, this technical issue means that the upper limit of the interval and the probability associated with the interval (i.e., the LOC) are conservative.
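A sketch of this sizing calculation in Python. It assumes the standard Garwood one-sided upper limit, μ_U = ½ χ²(LOC; 2(n+1)); the abstract's EXCEL expression omits the factor of ½, so that factor, the mapping CHIINV(1-α, df) = chi2.ppf(α-as-LOC, df), and the example numbers are assumptions here, not values from the report:

```python
from math import ceil
from scipy.stats import chi2

def poisson_mean_upper_limit(n_hits, loc):
    """Garwood one-sided upper confidence limit for a Poisson mean:
    mu_U = 0.5 * chi2.ppf(loc, 2 * (n_hits + 1))."""
    return 0.5 * chi2.ppf(loc, 2 * (n_hits + 1))

def histories_required(p_i_spec, n_hits_guess, loc):
    """Number of MC histories such that, if at most n_hits_guess hits occur,
    the upper limit on the impact probability stays below p_i_spec."""
    return ceil(poisson_mean_upper_limit(n_hits_guess, loc) / p_i_spec)

# Hypothetical example: allowed P_I = 1e-4, guess of 3 hits, 99% LOC
print(histories_required(1e-4, 3, 0.99))   # roughly 1e5 histories
```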
Benignus, Vernon A; Bushnell, Philip J; Boyes, William K
2011-12-01
Acute solvent exposures may contribute to automobile accidents because they increase reaction time and decrease attention, in addition to impairing other behaviors. These effects resemble those of ethanol consumption, both with respect to behavioral effects and neurological mechanisms. These observations, along with the extensive data on the relationship between ethanol consumption and fatal automobile accidents, suggested a way to estimate the probability of fatal automobile accidents from solvent inhalation. The problem can be approached using the logic of the algebraic transitive postulate of equality: if A=B and B=C, then A=C. We first calculated a function describing the internal doses of solvent vapors that cause the same magnitude of behavioral impairment as ingestion of ethanol (A=B). Next, we fit a function to data from the literature describing the probability of fatal car crashes for a given internal dose of ethanol (B=C). Finally, we used these two functions to generate a third function to estimate the probability of a fatal car crash for any internal dose of organic solvent vapor (A=C). This latter function showed quantitatively (1) that the likelihood of a fatal car crash is increased by acute exposure to organic solvent vapors at concentrations less than 1.0 ppm, and (2) that this likelihood is similar in magnitude to the probability of developing leukemia from exposure to benzene. This approach could also be applied to other potentially adverse consequences of acute exposure to solvents (e.g., nonfatal car crashes, property damage, and workplace accidents), if appropriate data were available. © 2011 Society for Risk Analysis Published 2011. This article is a U.S. Government work and is in the public domain for the U.S.A.
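The transitive chaining A=B, B=C, hence A=C reduces to function composition. A minimal sketch with hypothetical functional forms and coefficients (the paper fitted both functions to published data; nothing below is from those fits):

```python
import numpy as np

def ethanol_equivalent_dose(solvent_dose):
    # A = B: internal solvent dose mapped to the ethanol dose producing
    # equal behavioral impairment (assumed linear for illustration)
    return 2.5 * solvent_dose

def crash_probability_from_ethanol(ethanol_dose):
    # B = C: fatal-crash probability vs internal ethanol dose
    # (assumed exponential dose-response for illustration)
    return 1e-6 * np.exp(0.8 * ethanol_dose)

def crash_probability_from_solvent(solvent_dose):
    # A = C: the composite function obtained by chaining the two above
    return crash_probability_from_ethanol(ethanol_equivalent_dose(solvent_dose))

print(crash_probability_from_solvent(np.array([0.1, 0.5, 1.0])))
```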
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only 3 control parameters and a much discussed question is how to estimate them when a series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
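A simulation sketch of the sample statistics that moment-based estimation matches, including the standardized 6th moment; the GARCH(1, 1) parameters are illustrative, not the values fitted to NYSE or FTSE data:

```python
import numpy as np

rng = np.random.default_rng(3)
omega, a1, b1 = 0.05, 0.08, 0.90   # hypothetical GARCH(1,1) parameters
n = 200_000
r = np.empty(n)
sigma2 = omega / (1 - a1 - b1)     # start at the stationary variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()   # Gaussian innovations
    sigma2 = omega + a1 * r[t] ** 2 + b1 * sigma2

var = r.var()
kurt = ((r - r.mean()) ** 4).mean() / var**2
m6 = ((r - r.mean()) ** 6).mean() / var**3   # standardized 6th moment
print(var, kurt, m6)
```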
RELATIVE CONTRIBUTIONS OF THREE DESCRIPTIVE METHODS: IMPLICATIONS FOR BEHAVIORAL ASSESSMENT
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods—the ABC method, the conditional probability method, and the conditional and background probability method—to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations. PMID:19949536
NASA Technical Reports Server (NTRS)
Querci, F.; Kunde, V. G.; Querci, M.
1971-01-01
The basis and techniques are presented for generating opacity probability distribution functions for the CN molecule (red and violet systems) and the C2 molecule (Swan, Phillips, Ballik-Ramsay systems), two of the more important diatomic molecules in the spectra of carbon stars, with a view to including these distribution functions in equilibrium model atmosphere calculations. Comparisons to the CO molecule are also shown. The computation of the monochromatic absorption coefficient uses the most recent molecular data with revision of the oscillator strengths for some of the band systems. The total molecular stellar mass absorption coefficient is established through fifteen equations of molecular dissociation equilibrium to relate the distribution functions to each other on a per gram of stellar material basis.
Preoperative PROMIS Scores Predict Postoperative Success in Foot and Ankle Patients.
Ho, Bryant; Houck, Jeff R; Flemister, Adolph S; Ketz, John; Oh, Irvin; DiGiovanni, Benedict F; Baumhauer, Judith F
2016-09-01
The use of patient-reported outcomes continues to expand beyond the scope of clinical research to involve standard of care assessments across orthopedic practices. It is currently unclear how to interpret and apply this information in the daily care of patients in a foot and ankle clinic. We prospectively examined the relationship between preoperative patient-reported outcomes (PROMIS Physical Function, Pain Interference and Depression scores), determined minimal clinically important differences for these values, and assessed if these preoperative values were predictors of improvement after operative intervention. Prospective collection of all consecutive patient visits to a multisurgeon tertiary foot and ankle clinic was obtained between February 2015 and April 2016. This consisted of 16 023 unique visits across 7996 patients, with 3611 new patients. Patients undergoing elective operative intervention were identified by ICD-9 and CPT code. PROMIS physical function, pain interference, and depression scores were assessed at initial and follow-up visits. Minimum clinically important differences (MCIDs) were calculated using a distribution-based method. Receiver operating characteristic (ROC) curves were calculated to determine whether preoperative PROMIS scores were predictive of achieving MCID. Cutoff values for PROMIS scores that would predict achieving MCID and not achieving MCID with 95% specificity were determined. Prognostic pre- and posttest probabilities based on these cutoffs were calculated. Patients with a minimum of 7 months of follow-up (mean 9.9) who completed all PROMIS domains were included, resulting in 61 patients. ROC curves demonstrated that preoperative physical function scores were predictive of postoperative improvement in physical function (area under the curve [AUC] 0.83). Similarly, preoperative pain interference scores were predictive of postoperative pain improvement (AUC 0.73) and preoperative depression scores were also predictive of postoperative depression improvement (AUC 0.74). Patients with preoperative physical function T score below 29.7 had an 83% probability of achieving a clinically meaningful improvement in function as defined by MCID. Patients with preoperative physical function T score above 42 had a 94% probability of failing to achieve MCID. Patients with preoperative pain above 67.2 had a 66% probability of achieving MCID, whereas patients with preoperative pain below 55 had a 95% probability of failing to achieve MCID. Patients with preoperative depression below 41.5 had a 90% probability of failing to achieve MCID. Patient-reported outcomes (PROMIS) scores obtained preoperatively predicted improvement in foot and ankle surgery. Threshold levels in physical function, pain interference, and depression can be shared with patients as they decide whether surgery is a good option and help place a numerical value on patient expectations. Physical function scores below 29.7 were likely to improve with surgery, whereas those patients with scores above 42 were unlikely to make gains in function. Patients with pain scores less than 55 were similarly unlikely to improve, whereas those with scores above 67 had clinically significant pain reduction postoperatively. Reported prognostic cutoff values help to provide guidance to both the surgeon and the patient and can aid in shared decision making for treatment. Level II, prognostic study. © The Author(s) 2016.
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(alpha) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
Assignment of functional activations to probabilistic cytoarchitectonic areas revisited.
Eickhoff, Simon B; Paus, Tomas; Caspers, Svenja; Grosbras, Marie-Helene; Evans, Alan C; Zilles, Karl; Amunts, Katrin
2007-07-01
Probabilistic cytoarchitectonic maps in standard reference space provide a powerful tool for the analysis of structure-function relationships in the human brain. While these microstructurally defined maps have already been successfully used in the analysis of somatosensory, motor or language functions, several conceptual issues in the analysis of structure-function relationships still demand further clarification. In this paper, we demonstrate the principal approaches for anatomical localisation of functional activations based on probabilistic cytoarchitectonic maps by exemplary analysis of an anterior parietal activation evoked by visual presentation of hand gestures. After consideration of the conceptual basis and implementation of volume or local maxima labelling, we comment on some potential interpretational difficulties, limitations and caveats that could be encountered. Extending and supplementing these methods, we then propose a supplementary approach for quantification of structure-function correspondences based on distribution analysis. This approach relates the cytoarchitectonic probabilities observed at a particular functionally defined location to the areal specific null distribution of probabilities across the whole brain (i.e., the full probability map). Importantly, this method avoids the need for a unique classification of voxels to a single cortical area and may increase the comparability between results obtained for different areas. Moreover, as distribution-based labelling quantifies the "central tendency" of an activation with respect to anatomical areas, it will, in combination with the established methods, allow an advanced characterisation of the anatomical substrates of functional activations. Finally, the advantages and disadvantages of the various methods are discussed, focussing on the question of which approach is most appropriate for a particular situation.
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5 Ēd, where Ēd is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
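A sketch of the lognormal-fit check described above, on a synthetic stand-in for the measured Ed series (the shape parameter and sample size are illustrative):

```python
import numpy as np
from scipy.stats import kurtosis, lognorm, skew

rng = np.random.default_rng(8)
ed = lognorm.rvs(0.6, scale=1.0, size=20_000, random_state=rng)

print(skew(ed), kurtosis(ed))                 # right-skewed, heavy tailed
shape, loc, scale = lognorm.fit(ed, floc=0.0)
q90 = np.quantile(ed, 0.9)
model_p90 = lognorm.cdf(q90, shape, loc=loc, scale=scale)
print(shape, scale, model_p90)                # near 0.9 if the fit is adequate
```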
Sato, Hiroshi
1984-08-01
Let μ and μ1 be probability measures on a locally convex Hausdorff real topological linear space E. C. R. Baker [1] posed the…
Wei Wu; Charles Hall; Lianjun Zhang
2006-01-01
We predicted the spatial pattern of hourly probability of cloud cover in the Luquillo Experimental Forest (LEF) in North-Eastern Puerto Rico using four different models. The probability of cloud cover (defined as "the percentage of the area covered by clouds in each pixel on the map" in this paper) at any hour and any place is a function of three topographic variables...
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are: the fuzzy transformation via a ranking function, and the stochastic transformation in which the α-cut technique and linguistic hedges are used in the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Pair Production Induced by Ultrashort and Ultraintense Laser Pulses in Plasmas
NASA Astrophysics Data System (ADS)
Luo, Yue-E.; Wang, Xue-Wen; Wang, Yuan-Sheng; Ji, Shen-Tong; Yu, Hong
2018-06-01
The probability of Schwinger pair production is calculated, which is induced by an ultraintense and ultrashort laser pulse propagating in a plasma. The dependence of the probability on the amplitude of the laser pulse and the frequency of plasmas is analyzed. Particularly, the effect of the pulse duration on the probability is discussed, by introducing a pulse-shape function to describe the temporal shape of the laser pulse. The results show that a laser with shorter pulse is more efficient in pair production. The probability of pair production increases when the order of the duration is comparable to the period of a laser.
NASA Astrophysics Data System (ADS)
Liland, Kristian Hovde; Snipen, Lars
When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
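The cited implementation is an R package; a simulation stand-in for the gap distribution it describes can be sketched as follows, with illustrative sequence length and success probability:

```python
import numpy as np

# Empirical distribution of gaps between successes in fixed-length
# sequences of Bernoulli trials.
rng = np.random.default_rng(4)
length, p, reps = 200, 0.05, 20_000

gaps = []
for _ in range(reps):
    hits = np.flatnonzero(rng.random(length) < p)   # positions of successes
    gaps.extend(np.diff(hits))

gaps = np.asarray(gaps)
# Over-representation of short distances would show up as excess mass here
print(np.mean(gaps <= 5), np.bincount(gaps, minlength=10)[:10] / len(gaps))
```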
Finite-size scaling of survival probability in branching processes
NASA Astrophysics Data System (ADS)
Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro
2015-04-01
Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y/(e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
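A sketch evaluating the scaling function and a Monte Carlo survival probability; the binary offspring law is chosen only for simplicity (the result holds for any finite-variance offspring distribution):

```python
import numpy as np

def G(y):
    """Exact scaling function from the abstract, G(y) = 2 y e^y / (e^y - 1),
    with the y -> 0 limit G(0) = 2 handled explicitly."""
    y = np.asarray(y, dtype=float)
    out = np.full_like(y, 2.0)
    nz = np.abs(y) > 1e-12
    out[nz] = 2.0 * y[nz] * np.exp(y[nz]) / (np.exp(y[nz]) - 1.0)
    return out

def survival(p, generations, trials=20_000, seed=5):
    """Monte Carlo survival probability of a binary Galton-Watson process
    with offspring pmf {0: 1-p, 2: p}; the process is critical at p = 1/2."""
    rng = np.random.default_rng(seed)
    alive = 0
    for _ in range(trials):
        z = 1
        for _ in range(generations):
            z = 2 * rng.binomial(z, p)   # each individual doubles w.p. p
            if z == 0:
                break
        alive += z > 0
    return alive / trials

print(G(np.array([0.0, 1.0, 2.0])))
print(survival(0.5, 50))   # at criticality, decays like ~2/(sigma^2 n)
```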
Probability distributions for multimeric systems.
Albert, Jaroslav; Rooman, Marianne
2016-01-01
We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdf) are well-approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left and the right hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro
2015-11-01
To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. Copyright © 2015 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2017-12-01
We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^{α/2}, where L indicates a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks' with admissible interval 0 < α ≤ 2. We deduce probability-generating functions (network Green's functions) for the fractional random walk. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for dimensions d > α (recurrent for d ≤ α) of the lattice. As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, ... and in the range 1 ≤ α < 2 for dimensions d ≥ 2. Finally, for α = 2, Polya's classical recurrence theorem is recovered, namely the walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain, for the transient regime 0 < α < 1, closed form expressions for the fractional lattice Green's function matrix containing the escape and ever passage probabilities. The ever passage probabilities (fractional lattice Green's functions) in the transient regime fulfil a Riesz potential power law decay asymptotic behavior for nodes far from the departure node. The non-locality of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix with Lévy-type heavy tailed inverse power law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties with the emergence of Lévy flights on large (infinite) lattices.
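A sketch on a finite ring, assuming the standard fractional-walk construction in which one-step transition probabilities are read off L^{α/2} row by row; N and α are illustrative choices:

```python
import numpy as np

# Fractional random walk on a ring of N nodes: build L^(alpha/2) from the
# eigendecomposition of the cycle-graph Laplacian.
N, alpha = 101, 1.0
k = np.arange(N)
lam = 2.0 - 2.0 * np.cos(2.0 * np.pi * k / N)              # circulant eigenvalues
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)   # Fourier eigenvectors
L_frac = ((F * lam ** (alpha / 2.0)) @ F.conj().T).real    # L^(alpha/2)

# Transition probabilities w(i -> j) = -L_frac[i, j] / L_frac[i, i]; each
# row sums to one because every row of L^(alpha/2) sums to zero.
W = -L_frac / np.diag(L_frac)[:, None]
np.fill_diagonal(W, 0.0)

# Heavy-tailed inverse power-law decay of long-range move probabilities
print(W[0, 1], W[0, 10], W[0, 50])
```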
Probability and the changing shape of response distributions for orientation.
Anderson, Britt
2014-11-18
Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.
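A quick numerical illustration of the mixture argument above: changing the variances of the components of a normal mixture raises kurtosis without shifting the mean. The weights and standard deviations below are illustrative, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
narrow = rng.normal(0.0, 1.0, n)
broad = rng.normal(0.0, 4.0, n)
mix = np.where(rng.random(n) < 0.8, narrow, broad)   # 80/20 mixture

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

# A single normal has excess kurtosis ~0; the mixture is leptokurtic.
print(excess_kurtosis(narrow), excess_kurtosis(mix))
```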
The gravitational law of social interaction
NASA Astrophysics Data System (ADS)
Levy, Moshe; Goldenberg, Jacob
2014-01-01
While a great deal is known about the topology of social networks, there is much less agreement about the geographical structure of these networks. The fundamental question in this context is: how does the probability of a social link between two individuals depend on the physical distance between them? While it is clear that the probability decreases with the distance, various studies have found different functional forms for this dependence. The exact form of the distance dependence has crucial implications for network searchability and dynamics: Kleinberg (2000) [15] shows that the small-world property holds if the probability of a social link is a power-law function of the distance with power -2, but not with any other power. We investigate the distance dependence of link probability empirically by analyzing four very different sets of data: Facebook links, data from the electronic version of the Small-World experiment, email messages, and data from detailed personal interviews. All four datasets reveal the same empirical regularity: the probability of a social link is proportional to the inverse of the square of the distance between the two individuals, analogously to the distance dependence of the gravitational force. Thus, it seems that social networks spontaneously converge to the exact unique distance dependence that ensures the Small-World property.
Dissecting the Function of Hippocampal Oscillations in a Human Anxiety Model
Khemka, Saurabh
2017-01-01
Neural oscillations in hippocampus and medial prefrontal cortex (mPFC) are a hallmark of rodent anxiety models that build on conflict between approach and avoidance. Yet, the function of these oscillations, and their expression in humans, remain elusive. Here, we used magnetoencephalography (MEG) to investigate neural oscillations in a task that simulated approach–avoidance conflict, wherein 23 male and female human participants collected monetary tokens under a threat of virtual predation. Probability of threat was signaled by color and learned beforehand by direct experience. Magnitude of threat corresponded to a possible monetary loss, signaled as a quantity. We focused our analyses on an a priori defined region-of-interest, the bilateral hippocampus. Oscillatory power under conflict was linearly predicted by threat probability in a location consistent with right mid-hippocampus. This pattern was specific to the hippocampus, most pronounced in the gamma band, and not explained by spatial movement or anxiety-like behavior. Gamma power was modulated by slower theta rhythms, and this theta modulation increased with threat probability. Furthermore, theta oscillations in the same location showed greater synchrony with mPFC theta with increased threat probability. Strikingly, these findings were not seen in relation to an increase in threat magnitude, which was explicitly signaled as a quantity and induced similar behavioral responses as learned threat probability. Thus, our findings suggest that the expression of hippocampal and mPFC oscillatory activity in the context of anxiety is specifically linked to threat memory. These findings resonate with neurocomputational accounts of the role played by hippocampal oscillations in memory. SIGNIFICANCE STATEMENT We use a biologically relevant approach–avoidance conflict test in humans while recording neural oscillations with magnetoencephalography to investigate the expression and function of hippocampal oscillations in human anxiety. Extending nonhuman studies, we can assign a possible function to hippocampal oscillations in this task, namely threat memory communication. This blends into recent attempts to elucidate the role of brain synchronization in defensive responses to threat. PMID:28626018
Saber, Hamidreza; Rajah, Gary B; Kherallah, Riyad Y; Jadhav, Ashutosh P; Narayanan, Sandra
2017-12-15
Mechanical thrombectomy (MT) is increasingly used for large-vessel occlusions (LVO), but randomized clinical trial (RCT) level data with regard to differences in clinical outcomes of MT devices are limited. We conducted a network meta-analysis (NMA) that enables comparison of modern MT devices (Trevo, Solitaire, Aspiration) and strategies (stent retriever vs aspiration) across trials. Relevant RCTs were identified by a systematic review. The efficacy outcome was 90-day functional independence (modified Rankin Scale (mRS) score 0-2). Safety outcomes were 90-day catastrophic outcome (mRS 5-6) and symptomatic intracranial hemorrhage (sICH). Fixed-effect Bayesian NMA was performed to calculate risk estimates and the rank probabilities. In a NMA of six relevant RCTs (SWIFT, TREVO2, EXTEND-IA, SWIFT-PRIME, REVASCAT, THERAPY; total of 871 patients, 472 Solitaire vs medical-only, 108 Aspiration vs medical-only, 178 Trevo vs Merci, and 113 Solitaire vs Merci) with medical-only arm as the reference, Trevo had the greatest functional independence (OR 4.14, 95% credible interval (CrI) 1.41-11.80; top rank probability 92%) followed by Solitaire (OR 2.55, 95% CrI 1.75-3.74; top rank probability 72%). Solitaire and Aspiration devices had the greatest top rank probability with respect to low sICH and catastrophic outcomes (76% and 91%, respectively), but without significant differences between each other. In a separate network of seven RCTs (MR-CLEAN, ESCAPE, EXTEND-IA, SWIFT-PRIME, REVASCAT, THERAPY, ASTER; 1737 patients), first-line stent retriever was associated with a higher top rank probability of functional independence than aspiration (95% vs 54%), with comparable safety outcomes. These findings suggest that Trevo and Solitaire devices are associated with a greater likelihood of functional independence whereas Solitaire and Aspiration devices appear to be safer. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Identifying uniformly mutated segments within repeats.
Sahinalp, S Cenk; Eichler, Evan; Goldberg, Paul; Berenbrink, Petra; Friedetzky, Tom; Ergun, Funda
2004-12-01
Given a long string of characters from a constant size alphabet we present an algorithm to determine whether its characters have been generated by a single i.i.d. random source. More specifically, consider all possible n-coin models for generating a binary string S, where each bit of S is generated via an independent toss of one of the n coins in the model. The choice of which coin to toss is decided by a random walk on the set of coins where the probability of a coin change is much lower than the probability of using the same coin repeatedly. We present a procedure to evaluate the likelihood of an n-coin model for a given S, subject to a uniform prior distribution over the parameters of the model (which represent mutation rates and probabilities of copying events). In the absence of detailed prior knowledge of these parameters, the algorithm can be used to determine whether the a posteriori probability for n=1 is higher than for any other n>1. Our algorithm runs in time O(l^4 log l), where l is the length of S, through a dynamic programming approach which exploits the assumed convexity of the a posteriori probability for n. Our test can be used in the analysis of long alignments between pairs of genomic sequences in a number of ways. For example, functional regions in genome sequences exhibit much lower mutation rates than non-functional regions. Because our test provides means for determining variations in the mutation rate, it may be used to distinguish functional regions from non-functional ones. Another application is in determining whether two highly similar, thus evolutionarily related, genome segments are the result of a single copy event or of a complex series of copy events. This is particularly an issue in evolutionary studies of genome regions rich with repeat segments (especially tandemly repeated segments).
Individual-tree probability of survival model for the Northeastern United States
Richard M. Teck; Donald E. Hilt
1990-01-01
Describes a distance-independent, individual-tree probability of survival model for the Northeastern United States. Survival is predicted using a six-parameter logistic function with species-specific coefficients. Coefficients are presented for 28 species groups. The model accounts for variability in annual survival due to species, tree size, site quality, and the tree...
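The paper's species-specific coefficients are not reproduced here, but the general shape of such a model is easy to sketch. The predictors and coefficient values below are hypothetical placeholders, assuming a logistic form of the kind described:

```python
import math

def annual_survival_probability(dbh, site_index, basal_area,
                                coef=(-1.2, 0.35, -0.004, 0.02, -0.01, 0.5)):
    """Hypothetical six-parameter logistic survival function of the general
    form used by distance-independent individual-tree models:
    P(survive) = 1 / (1 + exp(-(b0 + b1*dbh + b2*dbh^2 + b3*site
                                + b4*BA + b5*dbh/BA)))."""
    b0, b1, b2, b3, b4, b5 = coef
    z = (b0 + b1 * dbh + b2 * dbh**2 + b3 * site_index
         + b4 * basal_area + b5 * dbh / basal_area)
    return 1.0 / (1.0 + math.exp(-z))

# Invented inputs: a 25 cm tree on a good site in a dense stand
print(annual_survival_probability(dbh=25.0, site_index=60.0, basal_area=120.0))
```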
Hold It! The Influence of Lingering Rewards on Choice Diversification and Persistence
ERIC Educational Resources Information Center
Schulze, Christin; van Ravenzwaaij, Don; Newell, Ben R.
2017-01-01
Learning to choose adaptively when faced with uncertain and variable outcomes is a central challenge for decision makers. This study examines repeated choice in dynamic probability learning tasks in which outcome probabilities changed either as a function of the choices participants made or independently of those choices. This presence/absence of…
A new algorithm for finding survival coefficients employed in reliability equations
NASA Technical Reports Server (NTRS)
Bouricius, W. G.; Flehinger, B. J.
1973-01-01
Product reliabilities are predicted from past failure rates and reasonable estimate of future failure rates. Algorithm is used to calculate probability that product will function correctly. Algorithm sums the probabilities of each survival pattern and number of permutations for that pattern, over all possible ways in which product can survive.
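As a toy illustration of summing survival-pattern probabilities weighted by the number of permutations of each pattern, consider the special case of identical components where the product survives if at least k of n components function (the actual NASA algorithm is more general):

```python
from math import comb

def k_of_n_reliability(n, k, p):
    """Probability that at least k of n identical components survive:
    for each survival-pattern size i, multiply the number of
    permutations C(n, i) by the pattern probability p^i (1-p)^(n-i)
    and sum over all surviving patterns."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(k_of_n_reliability(n=5, k=3, p=0.9))  # ~0.991
```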
Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield
Robert B. Thomas
1986-01-01
Abstract - SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes, which cannot provide such variance estimates. Sampling probabilities are based on a sediment rating function which promotes greater sampling intensity during periods of high...
The high order dispersion analysis based on first-passage-time probability in financial markets
NASA Astrophysics Data System (ADS)
Liu, Chenggong; Shang, Pengjian; Feng, Guochen
2017-04-01
The study of first-passage-time (FPT) events in financial time series has gained broad research interest recently, as it can provide a reference for risk management and investment. In this paper, a new measure, high order dispersion (HOD), is developed based on FPT probability to explore financial time series. The tick-by-tick data of three Chinese stock markets and three American stock markets are investigated. We classify the financial markets successfully through analyzing the scaling properties of the FPT probabilities of the six stock markets and employing the HOD method to compare the differences of FPT decay curves. By applying the HOD method, it can be concluded that long-range correlation, a fat-tailed broad probability density function and its coupling with nonlinearity mainly lead to the multifractality of financial time series. Furthermore, we use the fluctuation function of multifractal detrended fluctuation analysis (MF-DFA) to distinguish markets and obtain results consistent with the HOD method, whereas the HOD method is capable of differentiating stock markets effectively within the same region. We believe that such explorations are relevant for a better understanding of financial market mechanisms.
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately, deep-tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images, it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising the image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, that the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and, since the observed data also assume finite values because of low photon counts, that the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function under these assumptions.
A method for modeling bias in a person's estimates of likelihoods of events
NASA Technical Reports Server (NTRS)
Nygren, Thomas E.; Morera, Osvaldo
1988-01-01
It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.
NASA Astrophysics Data System (ADS)
Kreinovich, Vladik; Longpre, Luc; Koshelev, Misha
1998-09-01
Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting techniques turn out to be interestingly related to the qualitative aesthetic measure introduced by G. Birkhoff as order/complexity.
Resonances in the cumulative reaction probability for a model electronically nonadiabatic reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, J.; Bowman, J.M.
1996-05-01
The cumulative reaction probability, flux-flux correlation function, and rate constant are calculated for a model, two-state, electronically nonadiabatic reaction, given by Shin and Light [S. Shin and J. C. Light, J. Chem. Phys. 101, 2836 (1994)]. We apply straightforward generalizations of the flux matrix/absorbing boundary condition approach of Miller and co-workers to obtain these quantities. The upper adiabatic electronic potential supports bound states, and these manifest themselves as "recrossing" resonances in the cumulative reaction probability, at total energies above the barrier to reaction on the lower adiabatic potential. At energies below the barrier, the cumulative reaction probability for the coupled system is shifted to higher energies relative to the one obtained for the ground state potential. This is due to the effect of an additional effective barrier caused by the nuclear kinetic operator acting on the ground state, adiabatic electronic wave function, as discussed earlier by Shin and Light. Calculations are reported for five sets of electronically nonadiabatic coupling parameters.
Game-theoretic strategies for asymmetric networked systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure consisting of a network of systems each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash Equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.
Transition probabilities of health states for workers in Malaysia using a Markov chain model
NASA Astrophysics Data System (ADS)
Samsuddin, Shamshimah; Ismail, Noriszura
2017-04-01
The aim of our study is to estimate the transition probabilities of health states for workers in Malaysia who contribute to the Employment Injury Scheme under the Social Security Organization Malaysia using a Markov chain model. Our study uses four states of health (active, temporary disability, permanent disability and death) based on data collected from longitudinal studies of workers in Malaysia over 5 years. The transition probabilities vary by health state, age and gender. The results show that male employees are more likely to have higher transition probabilities to any health state compared to female employees. The transition probabilities can be used to predict the future health of workers as a function of current age, gender and health state.
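A minimal sketch of the estimation step, assuming the standard maximum-likelihood estimator for a discrete-time Markov chain; the study's actual estimator, covariates, and data are not reproduced, and the toy histories below are invented:

```python
import numpy as np

STATES = ["active", "temporary", "permanent", "dead"]  # indices 0-3

def estimate_transition_matrix(state_sequences, n_states=4):
    """Maximum-likelihood estimate of Markov transition probabilities:
    P[i, j] = (observed i -> j transitions) / (all transitions out of i)."""
    counts = np.zeros((n_states, n_states))
    for seq in state_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Invented 5-year worker histories coded as indices into STATES
histories = [[0, 0, 1, 0, 0], [0, 1, 1, 2, 2], [0, 0, 0, 0, 3]]
print(estimate_transition_matrix(histories))
```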
Fowler, A.C.; Flint, Paul L.
1997-01-01
Following an oil spill off St Paul Island, Alaska in February 1996, persistence rates and detection probabilities of oiled king eider (Somateria spectabilis) carcasses were estimated using the Cormack-Jolly-Seber model. Carcass persistence rates varied by day, beach type and sex, while detection probabilities varied by day and beach type. Scavenging, wave action and weather influenced carcass persistence. The patterns of persistence differed on rock and sand beaches and female carcasses had a different persistence function than males. Weather, primarily snow storms, and degree of carcass scavenging, diminished carcass detectability. Detection probabilities on rock beaches were lower and more variable than on sand beaches. The combination of persistence rates and detection probabilities can be used to improve techniques of estimating total mortality.
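The practical upshot, correcting an observed carcass count for imperfect persistence and detection, can be sketched as below. The numbers are hypothetical, and the real analysis used the Cormack-Jolly-Seber model with day-, beach-, and sex-specific rates rather than this single-factor correction:

```python
def estimated_total_mortality(carcasses_found, persistence_rate, detection_prob):
    """Horvitz-Thompson-style correction: each carcass found represents
    1 / (P(still present) * P(detected | present)) actual deaths."""
    inclusion_prob = persistence_rate * detection_prob
    return carcasses_found / inclusion_prob

# e.g. 120 carcasses found, 60% persisting to the survey, 70% detected
print(estimated_total_mortality(120, 0.60, 0.70))  # ~286 estimated deaths
```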
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.
Search for function coefficient distribution in traditional Chinese medicine network
NASA Astrophysics Data System (ADS)
He, Yue; Zhang, Peipei; Sun, Anzheng; Su, Beibei; He, Da-Ren
2004-03-01
We suggest a model for simulating the development of the traditional Chinese medicine system. Suppose there are a certain number of Chinese medicines. Each of them is randomly given a "function coefficient" with a value between 0 and 1. The larger it is, the stronger the medicine's function for solving one health problem and serving as an "emperor" in a prescription formulation. The smaller it is, the stronger the medicine's function for harmonizing and/or accessorizing a prescription formulation. In every time step a new medicine is discovered. With a probability P(m), which is determined according to our statistical investigation results, it can produce a new prescription formulation with m-1 other medicines. We assume that the probability for choosing the function coefficients of these m medicines follows a distribution function which is everywhere smooth. A program has been set up to search for the form of this function so that the simulation results show the best agreement with our statistical data. We believe the resulting function form will be helpful for understanding the real development of the traditional Chinese medicine system.
Obidziński, Michał; Nieznański, Marek
2017-10-01
The presented research was conducted in order to investigate the connections between developmental dyslexia and the functioning of the verbatim and gist memory traces assumed in fuzzy-trace theory. The participants were 71 high school students (33 with dyslexia and 38 without learning difficulties). The modified procedure and multinomial model of Stahl and Klauer (the simplified conjoint recognition model) was used to collect and analyze data. Results showed statistically significant differences in four of the model parameters: (a) the probability of verbatim trace recollection upon presentation of an orthographically similar stimulus was higher in the control than the dyslexia group, (b) the probability of verbatim trace recollection upon presentation of a semantically similar stimulus was higher in the control than the dyslexia group, (c) the probability of gist trace retrieval upon presentation of a semantically similar stimulus was higher in the dyslexia than the control group, and (d) the probability of gist trace retrieval upon target stimulus presentation (in the semantic condition) was higher in the control than the dyslexia group. The obtained results suggest differences in memory functioning, in terms of verbatim and gist trace retrieval, between people with and without dyslexia on the specific, elementary cognitive processes postulated by fuzzy-trace theory. These findings suggest new approaches in the education of persons with developmental dyslexia, focused on the specific impairments and strengths of their memory functioning.
Defense Strategies for Asymmetric Networked Systems with Discrete Components.
Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun
2018-05-03
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
Labudda, Kirsten; Woermann, Friedrich G; Mertens, Markus; Pohlmann-Eden, Bernd; Markowitsch, Hans J; Brand, Matthias
2008-06-01
Recent functional neuroimaging and lesion studies demonstrate the involvement of the orbitofrontal/ventromedial prefrontal cortex as a key structure in decision making processes. This region seems to be particularly crucial when contingencies between options and consequences are unknown but have to be learned by the use of feedback following previous decisions (decision making under ambiguity). However, little is known about the neural correlates of decision making under risk conditions in which information about probabilities and potential outcomes is given. In the present study, we used functional magnetic resonance imaging to measure blood-oxygenation-level-dependent (BOLD) responses in 12 subjects during a decision making task. This task provided explicit information about probabilities and associated potential incentives. The responses were compared to BOLD signals in a control condition without information about incentives. In contrast to previous decision making studies, we completely removed the outcome phase following a decision to exclude the potential influence of feedback previously received on current decisions. The results indicate that the integration of information about probabilities and incentives leads to activations within the dorsolateral prefrontal cortex, the posterior parietal lobe, the anterior cingulate and the right lingual gyrus. We assume that this pattern of activation is due to the involvement of executive functions, conflict detection mechanisms and arithmetic operations during the deliberation phase of decisional processes that are based on explicit information.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
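For the unconstrained continuous case, the maximizing density is the generalized Gaussian f(x) ∝ exp(-(|x|/a)^p), which yields the straight-line relationship mentioned above: h = ln ||X||_p + (1 + ln p)/p + ln(2Γ(1 + 1/p)). A short numerical cross-check of this standard result (our own sketch, not code from the report):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def maxent_entropy(p, lp_norm):
    """Maximum differential entropy over all real-valued densities with a
    given Lp norm; attained by f(x) proportional to exp(-(|x|/a)^p)."""
    return np.log(lp_norm) + (1 + np.log(p)) / p + np.log(2 * gamma(1 + 1 / p))

# Numerical cross-check by direct integration (p = 2 is the Gaussian case)
p, a = 2.0, 1.3
c = p / (2 * a * gamma(1 / p))                  # normalizing constant
f = lambda x: c * np.exp(-(abs(x) / a) ** p)
h_direct, _ = quad(lambda x: -f(x) * np.log(f(x)), -10, 10)
lp_norm = a * p ** (-1 / p)                     # (E|X|^p)^(1/p) for this f
print(h_direct, maxent_entropy(p, lp_norm))     # the two values agree
```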
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated by the hypothesis that PTTs normalized by RR intervals follow a Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assume that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit the search ranges for detecting pulse peaks in the photoplethysmogram (PPG) and synchronize the results with cardiac beats; i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms were collected simultaneously during submaximal bicycle ergometer exercise tests. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
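A schematic of the peak-selection loop might look like the following; the initial Gaussian parameters and the exponentially weighted update rule are assumptions for illustration, not the paper's exact update:

```python
import numpy as np

class AdaptivePTTScorer:
    """Scores pulse-peak candidates by the Gaussian probability of their
    RR-normalized PTT, updating the Gaussian parameters after each beat."""

    def __init__(self, mu=0.3, sigma=0.05, alpha=0.1):
        self.mu, self.sigma, self.alpha = mu, sigma, alpha  # assumed values

    def score(self, ptt, rr):
        z = (ptt / rr - self.mu) / self.sigma
        return np.exp(-0.5 * z**2) / (self.sigma * np.sqrt(2 * np.pi))

    def pick_and_update(self, candidate_ptts, rr):
        best = max(candidate_ptts, key=lambda t: self.score(t, rr))
        x = best / rr
        # Exponentially weighted update keeps the Gaussian adaptive to
        # beat-to-beat variations of the cardiac cycle.
        self.mu = (1 - self.alpha) * self.mu + self.alpha * x
        self.sigma = np.sqrt((1 - self.alpha) * self.sigma**2
                             + self.alpha * (x - self.mu)**2)
        return best

scorer = AdaptivePTTScorer()
print(scorer.pick_and_update(candidate_ptts=[0.21, 0.29, 0.35], rr=1.0))
```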
An epidemic model of rumor diffusion in online social networks
NASA Astrophysics Data System (ADS)
Cheng, Jun-Jun; Liu, Yun; Shen, Bo; Yuan, Wei-Guo
2013-01-01
So far, in some standard rumor spreading models, the transition probability from ignorants to spreaders is always treated as a constant. From a practical perspective, however, whether or not an individual is infected by a neighboring spreader greatly depends on the trustworthiness of the tie between them. In order to address this problem, we introduce a stochastic epidemic model of rumor diffusion, in which the infection probability is defined as a function of the strength of ties. Moreover, we investigate numerically the behavior of the model on a real scale-free social site with exponent γ = 2.2. We verify that the strength of ties plays a critical role in the rumor diffusion process. In particular, preferentially selecting weak ties does not make the rumor spread faster and wider, but the efficiency of diffusion is greatly affected when they are removed. Another significant finding is that the maximum number of spreaders max(S) is very sensitive to the immune probability μ and the decay probability v. We show that a smaller μ or v leads to a larger spreading of the rumor, and their relationship can be described by the function ln(max(S)) = Av + B, in which the intercept B and the slope A can be fitted perfectly as power-law functions of μ. Our findings may offer some useful insights, helping to guide the application in practice and to reduce the damage brought by rumors.
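A rough sketch of one update step of such a model, assuming a linear dependence of the infection probability on tie strength (the paper's exact functional form is not reproduced):

```python
import random

def infection_probability(tie_strength, beta=0.5):
    # Assumed form: transmission chance grows linearly with tie strength.
    return beta * tie_strength

def spread_step(graph, strengths, status, mu=0.1, v=0.05, rng=random):
    """One synchronous step of an ignorant/spreader/stifler rumor model in
    which transmission depends on tie strength; mu is the immune (stifling)
    probability on contacting an informed node, v the decay probability."""
    new_status = dict(status)
    for node, state in status.items():
        if state != "spreader":
            continue
        for nbr in graph[node]:
            if status[nbr] == "ignorant":
                if rng.random() < infection_probability(strengths[(node, nbr)]):
                    new_status[nbr] = "spreader"
            elif rng.random() < mu:
                new_status[node] = "stifler"   # met an already-informed node
        if rng.random() < v:
            new_status[node] = "stifler"       # loses interest spontaneously
    return new_status

graph = {0: [1, 2], 1: [0], 2: [0]}
strengths = {(0, 1): 0.9, (1, 0): 0.9, (0, 2): 0.2, (2, 0): 0.2}
status = {0: "spreader", 1: "ignorant", 2: "ignorant"}
print(spread_step(graph, strengths, status))
```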
Radar sea reflection for low-e targets
NASA Astrophysics Data System (ADS)
Chow, Winston C.; Groves, Gordon W.
1998-09-01
Our model of radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The 'specular deviation angle' as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions, and together with the beam-broadening effect of the small surface ripples determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived, and its distribution as a function of position on the mean sea surface is described.
Hughes, G; Burnett, F J; Havis, N D
2013-11-01
Disease risk curves are simple graphical relationships between the probability of need for treatment and evidence related to risk factors. In the context of the present article, our focus is on factors related to the occurrence of disease in crops. Risk is the probability of adverse consequences; specifically in the present context it denotes the chance that disease will reach a threshold level at which crop protection measures can be justified. This article describes disease risk curves that arise when risk is modeled as a function of more than one risk factor, and when risk is modeled as a function of a single factor (specifically the level of disease at an early disease assessment). In both cases, disease risk curves serve as calibration curves that allow the accumulated evidence related to risk to be expressed on a probability scale. When risk is modeled as a function of the level of disease at an early disease assessment, the resulting disease risk curve provides a crop loss assessment model in which the downside is denominated in terms of risk rather than in terms of yield loss.
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
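The defining operation, integrating the PDF over discrete intensity intervals to obtain probability masses, reduces to binning in the empirical case. A minimal sketch with invented lognormal intensities standing in for radiation-path data:

```python
import numpy as np

def discrete_probability_function(samples, edges):
    """Builds a DPF: the integral of the (empirical) PDF over each discrete
    intensity interval, i.e. the probability mass per bin."""
    counts, _ = np.histogram(samples, bins=edges)
    return counts / counts.sum()

# Example: DPF of simulated intensities over 10 discrete intervals
intensities = np.random.lognormal(mean=2.0, sigma=0.4, size=10000)
edges = np.linspace(intensities.min(), intensities.max(), 11)
dpf = discrete_probability_function(intensities, edges)
print(dpf, dpf.sum())  # masses per interval; they sum to 1
```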
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. Similar to Kulldorff's method, we adopt a Monte Carlo test for the test of significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Through a simulation on independent benchmark data, we show that the test statistic based on the hypergeometric model outperforms Kulldorff's statistic for clusters of high population density or large size; otherwise Kulldorff's statistic is superior.
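The core likelihood evaluation can be sketched as follows, assuming the null model treats the in-window case count as a hypergeometric draw; the scan over candidate windows and the exact form of the test statistic are simplified away:

```python
import numpy as np
from scipy.stats import hypergeom

def null_log_likelihood(cases_in_window, pop_in_window, total_cases, total_pop):
    """Log-likelihood of the in-window case count under the null hypothesis
    of no clustering: the window's cases behave like a hypergeometric draw
    of pop_in_window people from total_pop with total_cases 'successes'."""
    return hypergeom.logpmf(cases_in_window, total_pop, total_cases,
                            pop_in_window)

def monte_carlo_p_value(observed_stat, null_stats):
    # Rank-based significance, as in Kulldorff-style Monte Carlo testing.
    null_stats = np.asarray(null_stats)
    return (1 + np.sum(null_stats >= observed_stat)) / (1 + len(null_stats))

# Example: 40 of 500 cases fall in a window holding 5% of 100,000 people
print(null_log_likelihood(40, 5000, 500, 100_000))
```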
On the optimal identification of tag sets in time-constrained RFID configurations.
Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel
2011-01-01
In Radio Frequency Identification facilities, the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (the sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
Exaggerated risk: prospect theory and probability weighting in risky choice.
Kusev, Petko; van Schaik, Paul; Ayton, Peter; Dent, John; Chater, Nick
2009-11-01
In 5 experiments, we studied precautionary decisions in which participants decided whether or not to buy insurance with specified cost against an undesirable event with specified probability and cost. We compared the risks taken for precautionary decisions with those taken for equivalent monetary gambles. Fitting these data to Tversky and Kahneman's (1992) prospect theory, we found that the weighting function required to model precautionary decisions differed from that required for monetary gambles. This result indicates a failure of the descriptive invariance axiom of expected utility theory. For precautionary decisions, people overweighted small, medium-sized, and moderately large probabilities: they exaggerated risks. This effect is not anticipated by prospect theory or experience-based decision research (Hertwig, Barron, Weber, & Erev, 2004). We found evidence that exaggerated risk is caused by the accessibility of events in memory: the weighting function varies as a function of the accessibility of events. This suggests that people's experiences of events leak into decisions even when risk information is explicitly provided. Our findings highlight a need to investigate how variation in decision content produces variation in preferences for risk.
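For reference, the Tversky-Kahneman (1992) one-parameter weighting function that such data are fitted to, with their median gain-domain estimate γ = 0.61 (the exaggerated-risk pattern reported here corresponds to a differently shaped, more strongly overweighting function):

```python
def tk_weighting(p, gamma=0.61):
    """Tversky & Kahneman (1992) probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g). gamma < 1 gives the classic
    inverse-S shape: small probabilities overweighted, large ones
    underweighted."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.1, 0.5, 0.9):
    print(p, round(tk_weighting(p), 3))  # w(0.01) ~ 0.055 > 0.01
```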
Wu, Shih-Wei; Delgado, Mauricio R; Maloney, Laurence T
2011-06-15
In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent "motor lotteries." The probability of each outcome in a motor lottery is determined by the subject's noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks in which information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks in which probability was implicit in the subjects' own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk.
Sheehan, David V; Mancini, Michele; Wang, Jianing; Berggren, Lovisa; Cao, Haijun; Dueñas, Héctor José; Yue, Li
2016-01-01
We compared functional impairment outcomes assessed with Sheehan Disability Scale (SDS) after treatment with duloxetine versus selective serotonin reuptake inhibitors (SSRIs) in patients with major depressive disorder. Data were pooled from four randomized studies comparing treatment with duloxetine and SSRIs (three double blind and one open label). Analysis of covariance, with last-observation-carried-forward approach for missing data, explored treatment differences between duloxetine and SSRIs on SDS changes during 8 to 12 weeks of acute treatment for the intent-to-treat population. Logistic regression analysis examined the predictive capacity of baseline patient characteristics for remission in functional impairment (SDS total score ≤ 6 and SDS item scores ≤ 2) at endpoint. Included were 2193 patients (duloxetine n = 1029; SSRIs n = 835; placebo n = 329). Treatment with duloxetine and SSRIs resulted in significantly (p < 0.01) greater improvements in the SDS total score versus treatment with placebo. Higher SDS (p < 0.0001) or 17-item Hamilton Depression Rating Scale baseline scores (p < 0.01) predicted lower probability of functional improvement after treatment with duloxetine or SSRIs. Female gender (p ≤ 0.05) predicted higher probability of functional improvement after treatment with duloxetine or SSRIs. Treatment with SSRIs and duloxetine improved functional impairment in patients with major depressive disorder. Higher SDS or 17-item Hamilton Depression Rating Scale baseline scores predicted less probability of SDS improvement; female gender predicted better improvement in functional impairment at endpoint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; Brucken, E.; Devoto, F.
We search for resonant production of tt̄ pairs in 4.8 fb^-1 integrated luminosity of pp̄ collision data at √s = 1.96 TeV in the lepton+jets decay channel, where one top quark decays leptonically and the other hadronically. A matrix-element reconstruction technique is used; for each event a probability density function of the tt̄ candidate invariant mass is sampled. These probability density functions are used to construct a likelihood function, whereby the cross section for resonant tt̄ production is estimated, given a hypothetical resonance mass and width. The data indicate no evidence of resonant production of tt̄ pairs. A benchmark model of leptophobic Z′ → tt̄ is excluded with m_Z′ < 900 GeV/c^2 at 95% confidence level.
ERIC Educational Resources Information Center
Balasooriya, Uditha; Li, Jackie; Low, Chan Kee
2012-01-01
For any density function (or probability function), there always corresponds a "cumulative distribution function" (cdf). It is a well-known mathematical fact that the cdf is more general than the density function, in the sense that for a given distribution the former may exist without the existence of the latter. Nevertheless, while the…
Fuzzy rationality and parameter elicitation in decision analysis
NASA Astrophysics Data System (ADS)
Nikolova, Natalia D.; Tenekedjiev, Kiril I.
2010-07-01
It is widely recognised by decision analysts that real decision-makers always make estimates in an interval form. An overview of techniques to find an optimal alternative among such with imprecise and interval probabilities is presented. Scalarisation methods are outlined as most appropriate. A proper continuation of such techniques is fuzzy rational (FR) decision analysis. A detailed representation of the elicitation process influenced by fuzzy rationality is given. The interval character of probabilities leads to the introduction of ribbon functions, whose general form and special cases are compared with the p-boxes. As demonstrated, approximation of utilities in FR decision analysis does not depend on the probabilities, but the approximation of probabilities is dependent on preferences.
Local Directed Percolation Probability in Two Dimensions
NASA Astrophysics Data System (ADS)
Inui, Norio; Konno, Norio; Komatsu, Genichi; Kameoka, Koichi
1998-01-01
Using the series expansion method and Monte Carlo simulation, we study the directed percolation probability on the square lattice V_n^0 = {(x, y) ∈ Z^2 : x + y even, 0 ≤ y ≤ n, -y ≤ x ≤ y}. We calculate the local percolation probability P_n^l, defined as the connection probability between the origin and the site (0, n). The critical behavior of P_∞^l is clearly different from that of the global percolation probability P_∞^g, which is characterized by a critical exponent β_g. An analysis based on Padé approximants shows β_l = 2β_g. In addition, we find that the series expansion of P_{2n}^l can be expressed as a function of P_n^g.
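A brute-force Monte Carlo estimate of the local percolation probability on this lattice is straightforward to sketch (the paper's series-expansion machinery is not attempted here):

```python
import random

def local_percolation_probability(n, p, trials=20000, seed=0):
    """Monte Carlo estimate of P_n^l: the probability that the origin
    connects to the single site (0, n) when each directed bond
    (x, y) -> (x +/- 1, y + 1) is open independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        wet = {0}                      # x-coordinates reachable at level y
        for _ in range(n):
            nxt = set()
            for x in wet:
                if rng.random() < p:   # bond up-left
                    nxt.add(x - 1)
                if rng.random() < p:   # bond up-right
                    nxt.add(x + 1)
            wet = nxt
            if not wet:
                break
        hits += 0 in wet               # did we reach the site (0, n)?
    return hits / trials

print(local_percolation_probability(n=10, p=0.7))  # n even so (0, n) exists
```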
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl
Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. Key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
Persistence Probability Analyzed on the Taiwan STOCK Market
NASA Astrophysics Data System (ADS)
Chen, I.-Chun; Chen, Hung-Jung; Tseng, Hsen-Che
We report a numerical study of the Taiwan stock market, in which we used three data sources: the daily Taiwan stock exchange index (TAIEX) from January 1983 to May 2006, the daily OTC index from January 1995 to May 2006, and one-minute intraday data from February 2000 to December 2003. Our study is based on numerical estimates of the persistence exponent θ_p, the Hurst exponent H_2, and the fluctuation exponent h_2. We also discuss results concerning the persistence probability P(t), the qth-order price-price correlation function G_q(t), and the qth-order normalized fluctuation function f_q(t) among these indices.
Units of analysis and kinetic structure of behavioral repertoires
Thompson, Travis; Lubinski, David
1986-01-01
It is suggested that molar streams of behavior are constructed of various arrangements of three elementary constituents (elicited, evoked, and emitted response classes). An eight-cell taxonomy is elaborated as a framework for analyzing and synthesizing complex behavioral repertoires based on these functional units. It is proposed that the local force binding functional units into a smoothly articulated kinetic sequence arises from temporally arranged relative response probability relationships. Behavioral integration is thought to reflect the joint influence of the organism's hierarchy of relative response probabilities, fluctuating biological states, and the arrangement of environmental and behavioral events in time. PMID:16812461
NASA Astrophysics Data System (ADS)
Gilmanshin, I. R.; Kirpichnikov, A. P.
2017-09-01
A study of the algorithm governing the functioning of the early-detection module for excessive losses proves that it can be modeled using absorbing Markov chains. Of particular interest are the probability characteristics of this algorithm, studied in order to identify the relationship between the reliability indicators of individual elements, or the probabilities of occurrence of certain events, and the likelihood of transmitting reliable information. The relations identified during the analysis allow thresholds to be set for the reliability characteristics of the system components.
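For readers unfamiliar with the machinery: once the module's algorithm is written as an absorbing Markov chain, the standard fundamental-matrix identities give the probability characteristics directly. A generic sketch (the transition values below are invented, not taken from the study):

```python
import numpy as np

def absorption_analysis(Q, R):
    """For an absorbing Markov chain with transient-to-transient block Q
    and transient-to-absorbing block R, the fundamental matrix
    N = (I - Q)^-1 gives expected visits to transient states, and
    B = N @ R the probability of ending in each absorbing state."""
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
    return N, N @ R

# Invented two-transient-state module; the absorbing states stand for
# "reliable information transmitted" and "loss missed".
Q = np.array([[0.2, 0.6],
              [0.1, 0.3]])
R = np.array([[0.15, 0.05],
              [0.50, 0.10]])
N, B = absorption_analysis(Q, R)
print(B)   # rows sum to 1: absorption probabilities per starting state
```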
New method for estimating low-earth-orbit collision probabilities
NASA Technical Reports Server (NTRS)
Vedder, John D.; Tabor, Jill L.
1991-01-01
An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at a random time. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
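A toy version of the fitting stage, assuming Weibull-distributed minima (a common asymptotic form for the minima of positive separations) and invented sample data standing in for the Monte Carlo orbital simulation:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

# Stand-in for the Monte Carlo stage: minimum spacecraft-debris
# separations in km sampled at random times (hypothetical data).
min_distances = rng.weibull(1.8, size=5000) * 40.0

# Fit the extreme-value (Weibull) density to the empirical minima, then
# read off the collision probability as the chance that the nearest
# object comes within the combined object radius.
shape, loc, scale = weibull_min.fit(min_distances, floc=0.0)
combined_radius_km = 0.05
p_collision_per_sample = weibull_min.cdf(combined_radius_km, shape,
                                         loc=loc, scale=scale)
print(p_collision_per_sample)
```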
Exact joint density-current probability function for the asymmetric exclusion process.
Depken, Martin; Stinchcombe, Robin
2004-07-23
We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra.
Cellular Automata Generalized To An Inferential System
NASA Astrophysics Data System (ADS)
Blower, David J.
2007-11-01
Stephen Wolfram popularized elementary one-dimensional cellular automata in his book, A New Kind of Science. Among many remarkable things, he proved that one of these cellular automata was a Universal Turing Machine. Such cellular automata can be interpreted in a different way by viewing them within the context of the formal manipulation rules from probability theory. Bayes's Theorem is the most famous of such formal rules. As a prelude, we recapitulate Jaynes's presentation of how probability theory generalizes classical logic using modus ponens as the canonical example. We emphasize the important conceptual standing of Boolean Algebra for the formal rules of probability manipulation and give an alternative demonstration augmenting and complementing Jaynes's derivation. We show the complementary roles played in arguments of this kind by Bayes's Theorem and joint probability tables. A good explanation for all of this is afforded by the expansion of any particular logic function via the disjunctive normal form (DNF). The DNF expansion is a useful heuristic emphasized in this exposition because such expansions point out where relevant 0s should be placed in the joint probability tables for logic functions involving any number of variables. It then becomes a straightforward exercise to rely on Boolean Algebra, Bayes's Theorem, and joint probability tables in extrapolating to Wolfram's cellular automata. Cellular automata are seen as purely deductive systems, just like classical logic, which probability theory is then able to generalize. Thus, any uncertainties which we might like to introduce into the discussion about cellular automata are handled with ease via the familiar inferential path. Most importantly, the difficult problem of predicting what cellular automata will do in the far future is treated like any inferential prediction problem.
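The passage's central move, treating a deterministic rule as a degenerate conditional probability table so that ordinary probability calculus applies, can be illustrated with an elementary CA rule. The sketch below builds the 0/1-valued conditional table for Rule 110 and then propagates uncertain cell values through it; this is our illustration, not Blower's code:

```python
from itertools import product

RULE = 110

# Conditional probability table of the CA update: a deterministic rule
# puts a 1 on its output bit and the "relevant 0s" everywhere else,
# exactly where a DNF expansion says they belong.
cpt = {}
for i, (l, c, r) in enumerate(product([1, 0], repeat=3)):
    cpt[(l, c, r)] = (RULE >> (7 - i)) & 1   # Wolfram order: 111 ... 000

def next_cell_prob(pl, pc, pr):
    """P(next cell = 1) when the three neighbors are independently 1 with
    probabilities pl, pc, pr: marginalize the joint probability table."""
    total = 0.0
    for l, c, r in product([0, 1], repeat=3):
        w = ((pl if l else 1 - pl) * (pc if c else 1 - pc)
             * (pr if r else 1 - pr))
        total += w * cpt[(l, c, r)]
    return total

# Certain inputs reproduce the deterministic rule; uncertain inputs give
# graded beliefs, the probabilistic generalization of modus ponens.
print(next_cell_prob(1, 1, 0))        # 1.0 (Rule 110 maps 110 -> 1)
print(next_cell_prob(0.5, 0.5, 0.5))  # 0.625: 5 of 8 neighborhoods output 1
```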
Recovery after local extinction: factors affecting re-establishment of alpine lake zooplankton.
Knapp, Roland A; Sarnelle, Orlando
2008-12-01
The introduction of fishes into naturally fishless mountain lakes often results in the extirpation of large-bodied zooplankton species. The ability to predict whether or not particular species will recover following fish removal is critically important for the design and implementation of lake restoration efforts but is currently not possible because of a lack of information on what factors affect recovery. The objective of this study was to identify the factors influencing recovery probability in two large-bodied zooplankton species following fish removal. We predicted that (1) Daphnia melanica would have a higher probability of recovery than Hesperodiaptomus shoshone due to differences in reproductive mode (D. melanica is parthenogenetic, H. shoshone is obligately sexual), (2) recovery probability would be a decreasing function of fish residence time due to the negative relationship between fish residence time and size of the egg bank, and (3) recovery probability would be an increasing function of lake depth as a consequence of a positive relationship between lake depth and egg bank size. To test these predictions, we sampled contemporary zooplankton populations and collected paleolimnological data from 44 naturally fishless lakes that were stocked with trout for varying lengths of time before reverting to a fishless condition. D. melanica had a significantly higher probability of recovery than did H. shoshone (0.82 vs. 0.54, respectively). The probability of recovery for H. shoshone was also significantly influenced by lake depth, fish residence time, and elevation, but only elevation influenced the probability of recovery in D. melanica. These results are consistent with between-species differences in reproductive mode combined with the much greater longevity of diapausing eggs in D. melanica than in H. shoshone. Our data also suggest that H. shoshone will often fail to recover in lakes with fish residence times exceeding 50 years.
Analyzing the homeland security of the U.S.-Mexico border.
Wein, Lawrence M; Liu, Yifan; Motskin, Arik
2009-05-01
We develop a mathematical optimization model at the intersection of homeland security and immigration, that chooses various immigration enforcement decision variables to minimize the probability that a terrorist can successfully enter the United States across the U.S.-Mexico border. Included are a discrete choice model for the probability that a potential alien crosser will attempt to cross the U.S.-Mexico border in terms of the likelihood of success and the U.S. wage for illegal workers, a spatial model that calculates the apprehension probability as a function of the number of crossers, the number of border patrol agents, and the amount of surveillance technology on the border, a queueing model that determines the probability that an apprehended alien will be detained and removed as a function of the number of detention beds, and an equilibrium model for the illegal wage that balances the supply and demand for work and incorporates the impact of worksite enforcement. Our main result is that detention beds are the current system bottleneck (even after the large reduction in detention residence times recently achieved by expedited removal), and increases in border patrol staffing or surveillance technology would not provide any improvements without a large increase in detention capacity. Our model also predicts that surveillance technology is more cost effective than border patrol agents, which in turn are more cost effective than worksite inspectors, but these results are not robust due to the difficulty of predicting human behavior from existing data. Overall, the probability that a terrorist can successfully enter the United States is very high, and it would be extremely costly and difficult to significantly reduce it. We also investigate the alternative objective function of minimizing the flow of illegal aliens across the U.S.-Mexico border, and obtain qualitatively similar results.
Taillefumier, Thibaud; Magnasco, Marcelo O
2013-04-16
Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
Autonomous learning derived from experimental modeling of physical laws.
Grabec, Igor
2013-05-01
This article deals with the experimental description of physical laws by the probability density function of measured data. A Gaussian mixture model specified by representative data and related probabilities is utilized for this purpose. The information cost function of the model is described in terms of information entropy as the sum of the estimation error and the redundancy. A new method is proposed for searching for the minimum of the cost function. The number of resulting prototype data depends on the accuracy of measurement. Their adaptation resembles a self-organized, highly non-linear cooperation between neurons in an artificial neural network. A prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method does not include any free parameters except the objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since representative data are generally less numerous than the measured ones, the method is applicable for a rather general and objective compression of overwhelming experimental data in automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and on measured traffic flow data. The flow over a day is described by a vector of 24 components. The set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and related probabilities. These vectors represent the flow on normal working days and on weekends or holidays, while the related probabilities correspond to the relative frequencies of these days. This example reveals that autonomous learning yields a new basis for the interpretation of representative data and the optimal model structure.
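The traffic-flow compression can be imitated with standard tools. The sketch below substitutes synthetic workday/weekend profiles for the measured data and fixes the number of mixture components at 4, whereas the paper's method determines that number from the measurement accuracy:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical stand-in for a year of daily flow profiles: 365 days x 24
# hourly counts, mixing a "workday" and a "weekend" pattern.
hours = np.arange(24)
workday = (400 + 300 * np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
               + 300 * np.exp(-0.5 * ((hours - 17) / 2.0) ** 2))
weekend = 250 + 150 * np.exp(-0.5 * ((hours - 14) / 4.0) ** 2)
days = np.vstack([workday + rng.normal(0, 20, 24) for _ in range(261)]
                 + [weekend + rng.normal(0, 20, 24) for _ in range(104)])

# Compress 365 vectors to a few representative prototypes; the mixture
# weights play the role of the day types' relative frequencies.
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(days)
print(gmm.weights_)       # ~ relative frequencies of day types
print(gmm.means_.shape)   # 4 prototype daily profiles (4 x 24)
```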
Comparison of clinician-predicted to measured low vision outcomes.
Chan, Tiffany L; Goldstein, Judith E; Massof, Robert W
2013-08-01
To compare low-vision rehabilitation (LVR) clinicians' predictions of the probability of success of LVR with patients' self-reported outcomes after provision of usual outpatient LVR services, and to determine if patients' traits influence clinician ratings. The Activity Inventory (AI), a self-report visual function questionnaire, was administered pre- and post-LVR to 316 low-vision patients served by 28 LVR centers that participated in a collaborative observational study. The physical component of the Short Form-36, the Geriatric Depression Scale, and the Telephone Interview for Cognitive Status were also administered pre-LVR to measure physical capability, depression, and cognitive status. After patient evaluation, 38 LVR clinicians estimated the probability of outcome success (POS) using their own criteria. The POS ratings and change in functional ability were used to assess the effects of patients' baseline traits on predicted outcomes. A regression analysis with a hierarchical random-effects model showed no relationship between LVR physician POS estimates and AI-based outcomes. In another analysis, kappa statistics were calculated to determine the probability of agreement between POS and AI-based outcomes for different outcome criteria. Across all comparisons, none of the kappa values were significantly different from 0, which indicates that the rate of agreement is equivalent to chance. In an exploratory analysis, hierarchical mixed-effects regression models show that POS ratings are associated with information about the patient's cognitive functioning and the combination of visual acuity and functional ability, as opposed to visual acuity or functional ability alone. Clinicians' predictions of LVR outcomes seem to be influenced by knowledge of patients' cognitive functioning and the combination of visual acuity and functional ability, information clinicians acquire from the patient's history and examination. However, clinicians' predictions do not agree with observed changes in functional ability from the patient's perspective; they are no better than chance.
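A hedged sketch of the agreement analysis: dichotomized clinician predictions are compared with dichotomized outcome scores via Cohen's kappa. The ratings below are invented, and the paper's actual outcome criteria are not reproduced.

```python
from sklearn.metrics import cohen_kappa_score

pos_rating = [1, 1, 0, 1, 0, 0, 1, 0]   # clinician: success predicted? (invented)
ai_outcome = [0, 1, 1, 0, 0, 1, 0, 1]   # AI change above criterion? (invented)
kappa = cohen_kappa_score(pos_rating, ai_outcome)
print(f"kappa = {kappa:.2f}")            # values near 0 indicate chance agreement
```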
Mode switching in volcanic seismicity: El Hierro 2011-2013
NASA Astrophysics Data System (ADS)
Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.
2016-05-01
The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stacking the resulting probability density functions for the estimated b̃ value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b̃ value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with intermittent activation and cessation of activity in different parts of the volcanic system rather than with any systematic underlying process.
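The stacking procedure can be sketched as follows, under stated assumptions (a synthetic catalog, the Aki maximum-likelihood b estimator, and no binning correction):

```python
import numpy as np

rng = np.random.default_rng(2)
Mc = 1.0
mags = Mc + rng.exponential(scale=0.5, size=5000)  # synthetic catalog (b ~ 0.87)

def b_value(m):
    # Aki (1965) maximum-likelihood estimator; binning correction omitted.
    return np.log10(np.e) / (m.mean() - Mc)

estimates = []
for _ in range(2000):
    start = rng.integers(0, len(mags) - 500)        # random window position
    length = rng.integers(200, 500)                 # random window length
    estimates.append(b_value(mags[start:start + length]))

stacked_pdf, edges = np.histogram(estimates, bins=60, density=True)
```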
NASA Astrophysics Data System (ADS)
Dufty, J. W.
1984-09-01
Diffusion of a tagged particle in a fluid with uniform shear flow is described. The continuity equation for the probability density describing the position of the tagged particle is considered. The diffusion tensor is identified by expanding the irreversible part of the probability current to first order in the gradient of the probability density, but with no restriction on the shear rate. The tensor is expressed as the time integral of a nonequilibrium autocorrelation function for the velocity of the tagged particle in its local fluid rest frame, generalizing the Green-Kubo expression to the nonequilibrium state. The tensor is evaluated from results obtained previously for the velocity autocorrelation function that are exact for Maxwell molecules in the Boltzmann limit. The effects of viscous heating are included and the dependence on frequency and shear rate is displayed explicitly. The mode-coupling contributions to the frequency and shear-rate dependent diffusion tensor are calculated.
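The equilibrium Green-Kubo relation that the paper generalizes can be illustrated numerically: the diffusion coefficient is the time integral of the velocity autocorrelation function. The sketch uses a synthetic Ornstein-Uhlenbeck velocity series, not the sheared nonequilibrium correlation function derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n, gamma, kT = 1e-2, 100_000, 1.0, 1.0
v = np.zeros(n)
for i in range(1, n):   # Langevin (Ornstein-Uhlenbeck) velocity dynamics
    v[i] = v[i-1] - gamma * v[i-1] * dt \
           + np.sqrt(2.0 * gamma * kT * dt) * rng.standard_normal()

max_lag = 500
vacf = np.array([np.mean(v[:n - k] * v[k:]) for k in range(max_lag)])
D = np.trapz(vacf, dx=dt)          # Green-Kubo: D = integral of <v(0)v(t)> dt
print(D, "vs exact", kT / gamma)   # should agree for this model
```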
Greenhow, Anna K; Hunt, Maree J; Macaskill, Anne C; Harper, David N
2015-09-01
Delay and uncertainty of receipt both reduce the subjective value of reinforcers. Delay has a greater impact on the subjective value of smaller reinforcers than of larger ones while the reverse is true for uncertainty. We investigated the effect of reinforcer magnitude on discounting of delayed and uncertain reinforcers using a novel approach: embedding relevant choices within a computer game. Participants made repeated choices between smaller, certain, immediate outcomes and larger, but delayed or uncertain outcomes while experiencing the result of each choice. Participants' choices were generally well described by the hyperbolic discounting function. Smaller numbers of points were discounted more steeply than larger numbers as a function of delay but not probability. The novel experiential choice task described is a promising approach to investigating both delay and probability discounting in humans. © Society for the Experimental Analysis of Behavior.
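A sketch of fitting the hyperbolic discounting function V = A/(1 + kD) to indifference points; the data values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

delays = np.array([0.0, 5, 10, 20, 40, 80])      # game delays (arbitrary units)
indiff = np.array([100, 71, 56, 38, 24, 13.0])   # indifference points for 100 points

def hyperbolic(D, k):
    return 100.0 / (1.0 + k * D)                 # V = A / (1 + k D), A fixed at 100

(k_hat,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.1])
print(f"estimated discount rate k = {k_hat:.3f}")
```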
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
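A hedged reconstruction of the two-stage idea with invented data: a per-variable logistic model is trained on labeled values, and the inspection step scores the value a packet would write. The quadratic feature and the 0.5 threshold are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Training data for one memory variable: values it took, labeled normal/abnormal.
values = np.r_[rng.normal(50.0, 3.0, 500), rng.uniform(0.0, 100.0, 50)]
labels = np.r_[np.ones(500), np.zeros(50)]

def features(v):
    return np.c_[v, (v - 50.0) ** 2]   # squared deviation from the nominal setpoint

model = LogisticRegression(max_iter=1000).fit(features(values), labels)

def normalcy_probability(v):
    return model.predict_proba(features(np.array([v])))[0, 1]

for v in (49.0, 93.0):
    verdict = "normal" if normalcy_probability(v) > 0.5 else "abnormal"
    print(f"write value {v}: {verdict}")
```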
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
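The ensemble is easy to sample numerically: draw two independent random mixed states by partially tracing Haar-random bipartite pure states and accumulate the spectrum of their difference. Dimensions and draw counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
dA, dB = 8, 8   # system and environment dimensions (illustrative)

def random_mixed_state():
    # Haar-random bipartite pure state as a dA x dB matrix of amplitudes;
    # rho_A = psi psi^dagger is the partial trace over the second factor.
    psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T

eigs = np.concatenate([
    np.linalg.eigvalsh(random_mixed_state() - random_mixed_state())
    for _ in range(2000)
])
aed_estimate, edges = np.histogram(eigs, bins=80, density=True)  # empirical AED
```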
A Bayesian Approach to Magnetic Moment Determination Using μSR
NASA Astrophysics Data System (ADS)
Blundell, S. J.; Steele, A. J.; Lancaster, T.; Wright, J. D.; Pratt, F. L.
A significant challenge in zero-field μSR experiments arises from the uncertainty in the muon site. It is possible to calculate the dipole field (and hence precession frequency ν) at any particular site given the magnetic moment μ and magnetic structure. One can also evaluate f(ν), the probability distribution function of ν, assuming that the muon site can be anywhere within the unit cell with equal probability, excluding physically forbidden sites. Since ν is obtained from experiment, what we would like to know is g(μ|ν), the probability density function of μ given the observed ν. This can be obtained from our calculated f(ν|μ) using Bayes' theorem. We describe an approach to this problem which we have used to extract information about real systems including a low-moment osmate compound, a family of molecular magnets, and an iron-arsenide compound.
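A grid-based sketch of the Bayes inversion, with a placeholder Gaussian forward model standing in for the dipole-field calculation of f(ν|μ) and a flat prior on μ:

```python
import numpy as np

mu_grid = np.linspace(0.0, 2.0, 400)   # candidate moments (e.g. in Bohr magnetons)
nu_obs = 1.3                           # measured precession frequency (illustrative)

def f_nu_given_mu(nu, mu, width=0.2):
    # Placeholder forward model; in practice this distribution comes from
    # dipole-field sums over candidate muon sites in the unit cell.
    return np.exp(-0.5 * ((nu - 1.0 * mu) / width) ** 2)

posterior = f_nu_given_mu(nu_obs, mu_grid)   # flat prior: g(mu|nu) ~ f(nu|mu)
posterior /= np.trapz(posterior, mu_grid)
print("posterior mean moment:", np.trapz(mu_grid * posterior, mu_grid))
```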
NASA Astrophysics Data System (ADS)
Xia, Xintao; Wang, Zhongyu
2008-10-01
For some methods of stability analysis of a system using statistics, it is difficult to resolve the problems of unknown probability distribution and small samples. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound, and the upper bound of the system using fuzzy-set theory. The empirical distribution function is then investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear and periodic systematic errors, and some mixed systems. The method of analysis for systematic stability is thereby validated.
Epidemic spreading between two coupled subpopulations with inner structures
NASA Astrophysics Data System (ADS)
Ruan, Zhongyuan; Tang, Ming; Gu, Changgui; Xu, Jinshan
2017-10-01
The structure of the underlying contact network and the mobility of agents are two decisive factors for epidemic spreading in reality. Here, we study a model consisting of two coupled subpopulations with intra-structures that emphasizes both the contact structure and the recurrent mobility pattern of individuals simultaneously. We show that the coupling of the two subpopulations (via interconnections between them and round trips of individuals) makes the epidemic threshold in each subnetwork the same. Moreover, we find that the interconnection probability between the two subpopulations and the travel rate are important factors for the spreading dynamics. In particular, as a function of the interconnection probability, the epidemic threshold in each subpopulation decreases monotonically, which enhances the risk of an epidemic, whereas the epidemic threshold varies non-monotonically as the travel rate increases. Moreover, the asymptotic infected density as a function of travel rate behaves differently in each subpopulation depending on the interconnection probability.
The condition of a finite Markov chain and perturbation bounds for the limiting probabilities
NASA Technical Reports Server (NTRS)
Meyer, C. D., Jr.
1979-01-01
The inequalities bounding the relative error ‖w − w̃‖/‖w‖ are exhibited by a very simple function of E and A. Let T denote the transition matrix of an ergodic chain C, and let A = I − T. Let E be a perturbation matrix such that T̃ = T − E is also the transition matrix of an ergodic chain C̃. Let w and w̃ denote the limiting probability (row) vectors for C and C̃. The inequality is the best one possible. This bound can be significant in the numerical determination of the limiting probabilities for an ergodic chain. In addition to presenting a sharp bound for ‖w − w̃‖/‖w‖, an explicit expression for w̃ is derived in which w̃ is given as a function of E, A, w, and some other related terms.
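A numerical illustration of the setup, with invented matrices: the stationary vectors of an ergodic chain and its perturbation are computed directly, giving the relative error that the paper's bound controls.

```python
import numpy as np

T = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])          # ergodic transition matrix (invented)
E = 0.01 * np.array([[ 1.0, -1.0,  0.0],
                     [ 0.0,  1.0, -1.0],
                     [-1.0,  0.0,  1.0]])
T_tilde = T - E                          # perturbed chain; rows still sum to one

def stationary(P):
    w, v = np.linalg.eig(P.T)            # left eigenvector for eigenvalue 1
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

w, w_tilde = stationary(T), stationary(T_tilde)
rel_err = np.linalg.norm(w - w_tilde, 1) / np.linalg.norm(w, 1)
print("relative error in limiting probabilities:", rel_err)
```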
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
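A toy sketch of the product-distribution idea, not the authors' implementation: each agent holds an independent categorical distribution over its own variable and reweights it, Boltzmann style, toward values with low conditional expected cost. The cost function, temperature, and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n_agents, n_vals, temperature = 4, 5, 0.5

def cost(x):                        # invented test function, minimized at x_i = 2
    return np.sum((x - 2) ** 2)

q = np.full((n_agents, n_vals), 1.0 / n_vals)  # product of categorical distributions
for _ in range(50):
    samples = np.array([[rng.choice(n_vals, p=q[i]) for i in range(n_agents)]
                        for _ in range(200)])
    costs = np.array([cost(s) for s in samples])
    for i in range(n_agents):
        # conditional expected cost of each value of agent i's variable
        exp_cost = np.array([costs[samples[:, i] == v].mean()
                             if np.any(samples[:, i] == v) else costs.max()
                             for v in range(n_vals)])
        q[i] = np.exp(-exp_cost / temperature)  # Boltzmann reweighting
        q[i] /= q[i].sum()

print(q.argmax(axis=1))             # distributions peak near the good solution
```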
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope (the density difference between adjacent cells) and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
POF-Darts: Geometric adaptive sampling for probability of failure
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; ...
2016-06-18
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
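For contrast, the baseline that POF-Darts improves upon can be sketched in a few lines: exhaustive Monte Carlo over the uncertain parameters, counting the fraction of samples in the failure region g(x) < 0 (the limit-state function below is invented):

```python
import numpy as np

rng = np.random.default_rng(7)

def g(x):                      # invented limit-state function; failure where g < 0
    return 2.5 - np.linalg.norm(x, axis=1)

x = rng.standard_normal((1_000_000, 2))   # samples of the uncertain parameters
pof = np.mean(g(x) < 0.0)
print(f"estimated probability of failure: {pof:.5f}")
```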
ERIC Educational Resources Information Center
Kuperman, Victor; Bresnan, Joan
2012-01-01
In a series of seven studies, this paper examines acoustic characteristics of the spontaneous speech production of the English dative alternation ("gave the book to the boy/ the boy the book") as a function of the probability of the choice between alternating constructions. Probabilistic effects on the acoustic duration were observed in the…
The Probability of Hitting a Polygonal Target
1981-04-01
Three functions are required for the use of this method for computing the probability of hitting a polygonal target: 1. PHIT (called by the user's main program), 2. FIJ (called by PHIT), and 3. FUN (called by FIJ). The user must include all three of these in his main program, but needs only to call PHIT.
Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD
ERIC Educational Resources Information Center
Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne
2015-01-01
We recorded visual event-related brain potentials from 32 adult male participants (16 high-functioning participants diagnosed with autism spectrum disorder (ASD) and 16 control participants, ranging in age from 18 to 53 years) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability…
Relationships of Stress Exposures to Health in Gulf War Veterans
2004-10-01
…whether such subgroups could be distinguished with respect to Gulf War exposures and probable posttraumatic stress disorder (PTSD). Additionally, we sought to examine the functional consequences of specific patterns of ill-health and probable PTSD ten…
VizieR Online Data Catalog: Proper motions of PM2000 open clusters (Krone-Martins+, 2010)
NASA Astrophysics Data System (ADS)
Krone-Martins, A.; Soubiran, C.; Ducourant, C.; Teixeira, R.; Le Campion, J. F.
2010-04-01
We present lists of proper motions and kinematic membership probabilities in the regions of 49 open clusters or possible open clusters. The stellar proper motions were taken from the Bordeaux PM2000 catalogue. The segregation between cluster and field stars and the assignment of membership probabilities were accomplished by applying a fully automated method based on parametrisations of the probability distribution functions and genetic algorithm optimisation heuristics associated with a derivative-based hill-climbing algorithm for the likelihood optimisation. (3 data files).
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W.
An account is given of the method used to quantify the risks accruing to the use of a radioisotope thermoelectric generator fueled by Pu-238 dioxide aboard the Space Shuttle-launched Ulysses mission. After using a Monte Carlo technique to develop probability distributions for the radiological consequences of a range of accident scenarios throughout the mission, factors affecting those consequences are identified in conjunction with their probability distributions. The functional relationship among all the factors is then established, and probability distributions for all factor effects are combined by means of a Monte Carlo technique.
Gap probability - Measurements and models of a pecan orchard
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, Yi
1992-01-01
Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs of 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown-level parameters include the shape of the crown envelope and the spacing of crowns; leaf-level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
Probabilistic Evaluation of Blade Impact Damage
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Abumeri, G. H.
2003-01-01
The response of a composite blade to high-velocity impact is probabilistically evaluated. The evaluation focuses on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluation are given in terms of cumulative probability distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at 0.999 probability of structural failure and substantial damage tolerance at 0.01 probability.
On the timing problem in optical PPM communications.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1971-01-01
Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on specification of timing accuracy, and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of particular importance is the presence of a residual, or irreducible, error probability, due entirely to the timing system, which cannot be overcome by the data channel.
The case of escape probability as linear in short time
NASA Astrophysics Data System (ADS)
Marchewka, A.; Schuss, Z.
2018-02-01
We derive rigorously the short-time escape probability of a quantum particle from its compactly supported initial state, which has a discontinuous derivative at the boundary of the support. We show that this probability is linear in time, which seems to be a new result. The novelty of our calculation is the inclusion of the boundary layer of the propagated wave function formed outside the initial support. This result has applications to the decay law of the particle, to the Zeno behaviour, quantum absorption, time of arrival, quantum measurements, and more.
Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface
NASA Astrophysics Data System (ADS)
Liu, Tianhui; Fu, Bina; Zhang, Dong H.
2013-11-01
The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity, and the helicopter orientation yields a higher dissociation probability than the cartwheel orientation. An interesting new site-averaging effect is found for the title molecule-surface system: the six-dimensional dissociation probability can essentially be reproduced by averaging the four-dimensional dissociation probabilities over 25 fixed sites.
NASA Technical Reports Server (NTRS)
Soneira, R. M.; Bahcall, J. N.
1981-01-01
Probabilities are calculated for acquiring suitable guide stars (GS) with the fine guidance system (FGS) of the space telescope. A number of the considerations and techniques described are also relevant for other space astronomy missions. The constraints of the FGS are reviewed. The available data on bright star densities are summarized and a previous error in the literature is corrected. Separate analytic and Monte Carlo calculations of the probabilities are described. A simulation of space telescope pointing is carried out using the Weistrop north galactic pole catalog of bright stars. Sufficient information is presented so that the probabilities of acquisition can be estimated as a function of position in the sky. The probability of acquiring suitable guide stars is greatly increased if the FGS can allow an appreciable difference between the (bright) primary GS limiting magnitude and the (fainter) secondary GS limiting magnitude.
ERIC Educational Resources Information Center
Rachlin, Howard
2006-01-01
In general, if a variable can be expressed as a function of its own maximum value, that function may be called a discount function. Delay discounting and probability discounting are commonly studied in psychology, but memory, matching, and economic utility also may be viewed as discounting processes. When they are so viewed, the discount function…
USDA-ARS?s Scientific Manuscript database
1. Plant functional traits provide a mechanistic basis for understanding ecological variation among plant species and the implications of this variation for species distribution, community assembly and restoration. 2. The bulk of our functional trait understanding, however, is centered on traits rel...
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days' duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice that of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By emphasizing and modelling detection probabilities, we now know: (i) that eradication of this species by searching is possible, (ii) how much searching effort would be required, (iii) under what environmental conditions searching would be most efficient, and (iv) several factors that are likely to modulate this quantification when searching is applied to new areas. The same approach can be used for evaluation of any control technology or population monitoring programme. © 2009 The Authors. Journal compilation © 2009 British Ecological Society.
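The eradication-planning arithmetic in the abstract reduces to a simple calculation: with per-occasion detection probability p for the least detectable individual, the probability of having removed it after k independent occasions is 1 - (1 - p)^k. A sketch using the study's overall estimate:

```python
import numpy as np

p_worst = 0.07                      # per-occasion estimate from the study (overall)
k = np.arange(0, 121)
p_removed = 1.0 - (1.0 - p_worst) ** k
k95 = k[np.argmax(p_removed >= 0.95)]
print(f"{k95} search occasions for a 95% chance of removing that individual")
```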
NASA Astrophysics Data System (ADS)
van Dam, A.; Gettel, G. M.; Kipkemboi, J.; Rahman, M. M.
2011-12-01
Papyrus wetlands in East Africa provide ecosystem services supporting the livelihoods of millions but are rapidly degrading due to economic development. For ecosystem conservation, an integrated understanding of the natural and social processes driving ecosystem change is needed. This research focuses on integrating the causal relationships between hydrology, ecosystem function, and livelihood sustainability in Nyando wetland, western Kenya. Livelihood sustainability is based on ecosystem services that include plant and animal harvest for building material and food, conversion of wetlands to crop and grazing land, water supply, and water quality regulation. Specific objectives were: to integrate studies of hydrology, ecology, and livelihood activities using a Bayesian Network (BN) model and include stakeholder involvement in model development. The BN model (Netica 4.16) had 35 nodes with seven decision nodes describing demography, economy, papyrus market, and rainfall, and two target nodes describing ecosystem function (defined by groundwater recharge, nutrient and sediment retention, and biodiversity) and livelihood sustainability (drinking water supply, crop production, livestock production, and papyrus yield). The conditional probability tables were populated using results of ecohydrological and socio-economic field work and consultations with stakeholders. The model was evaluated for an average year with decision node probabilities set according to data from research, expert opinion, and stakeholders' views. Then, scenarios for dry and wet seasons and for economic development (low population growth and unemployment) and policy development (more awareness of wetland value) were evaluated. In an average year, the probability for maintaining a "good" level of sediment and nutrient retention functions, groundwater recharge, and biodiversity was about 60%. ("Good" is defined by expert opinion based on ongoing field research.) In the dry season, the probability was reduced to about 40% and in the wet season increased to about 85%. Both ecosystem functions and livelihood sustainability were most sensitive to flooding and the human pressure, notably the area of crop conversion, grazing pressure, and papyrus harvest. Flooded conditions limit cropping, livestock herding and vegetation harvesting but have a strong positive effect on ecosystem function. Preliminary results suggest that the effects of economic and policy development on ecosystem function and livelihood sustainability were negligible, but more data on these aspects will be included in further model development. The advantage of this modeling approach, which integrates data from hydrological, ecological, and socio-economic studies, is that it highlights the relative effect of hydrologic conditions and socio-economic pressures on ecosystem function. This model is static, however, with long-term changes in climate and exploitation levels superimposed on seasonal hydrology dynamics. Further work should address this issue as well as further constrain probabilities at each node as field research continues.
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
Robust Bayesian decision theory applied to optimal dosage.
Abraham, Christophe; Daurès, Jean-Pierre
2004-04-15
We give a model for constructing a utility function u(θ, d) in a dose-prescription problem, where θ and d denote, respectively, the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables, which are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.
Entropy of finite random binary sequences with weak long-range correlations.
Melnik, S S; Usatenko, O V
2014-11-01
We study the N-step binary stationary ergodic Markov chain and analyze its differential entropy. Supposing that the correlations are weak, we express the conditional probability function of the chain through the pair correlation function and represent the entropy as a functional of the pair correlator. Since the model uses the two-point correlators instead of the block probability, it makes it possible to calculate the entropy of strings at much longer distances than standard methods allow. A fluctuation contribution to the entropy due to the finiteness of random chains is examined. This contribution can be of the same order as the regular part even at relatively short subsequence lengths. A self-similar structure of the entropy with respect to decimation transformations is revealed for some specific forms of the pair correlation function. An application of the theory to the DNA sequence of the R3 chromosome of Drosophila melanogaster is presented.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
NASA Technical Reports Server (NTRS)
Cerniglia, M. C.; Douglass, A. R.; Rood, R. B.; Sparling, L. C.; Nielsen, J. E.
1999-01-01
We present a study of the distribution of ozone in the lowermost stratosphere with the goal of understanding the relative contribution to the observations of air of either distinctly tropospheric or stratospheric origin. The air in the lowermost stratosphere is divided into two population groups based on Ertel's potential vorticity at 300 hPa. High [low] potential vorticity at 300 hPa suggests that the tropopause is low [high], and the identification of the two groups helps to account for dynamic variability. Conditional probability distribution functions are used to define the statistics of the mix from both observations and model simulations. Two data sources are chosen. First, several years of ozonesonde observations are used to exploit the high vertical resolution. Second, observations made by the Halogen Occultation Experiment [HALOE] on the Upper Atmosphere Research Satellite [UARS] are used to understand the impact on the results of the spatial limitations of the ozonesonde network. The conditional probability distribution functions are calculated at a series of potential temperature surfaces spanning the domain from the midlatitude tropopause to surfaces higher than the mean tropical tropopause [about 380K]. Despite the differences in spatial and temporal sampling, the probability distribution functions are similar for the two data sources. Comparisons with the model demonstrate that the model maintains a mix of air in the lowermost stratosphere similar to the observations. The model also simulates a realistic annual cycle. By using the model, possible mechanisms for the maintenance of mix of air in the lowermost stratosphere are revealed. The relevance of the results to the assessment of the environmental impact of aircraft effluence is discussed.
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretations of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
St. Clair, Caryn; Norwitz, Errol R.; Woensdregt, Karlijn; Cackovic, Michael; Shaw, Julia A.; Malkus, Herbert; Ehrenkranz, Richard A.; Illuzzi, Jessica L.
2011-01-01
We sought to define the risk of neonatal respiratory distress syndrome (RDS) as a function of both lecithin/sphingomyelin (L/S) ratio and gestational age. Amniotic fluid L/S ratio data were collected from consecutive women undergoing amniocentesis for fetal lung maturity at Yale-New Haven Hospital from January 1998 to December 2004. Women were included in the study if they delivered a live-born, singleton, nonanomalous infant within 72 hours of amniocentesis. The probability of RDS was modeled using multivariate logistic regression with L/S ratio and gestational age as predictors. A total of 210 mother-neonate pairs (8 RDS, 202 non-RDS) met criteria for analysis. Both gestational age and L/S ratio were independent predictors of RDS. A probability of RDS of 3% or less was noted at an L/S ratio cutoff of ≥3.4 at 34 weeks, ≥2.6 at 36 weeks, ≥1.6 at 38 weeks, and ≥1.2 at term. Under 34 weeks of gestation, the prevalence of RDS was so high that a probability of 3% or less was not observed by this model. These data describe a means of stratifying the probability of neonatal RDS using both gestational age and the L/S ratio and may aid in clinical decision making concerning the timing of delivery.
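The model form (though not the paper's fitted coefficients, which are invented here) can be sketched as a logistic function of the two predictors; with suitably chosen coefficients it yields probabilities near the 3% threshold at the quoted cutoffs:

```python
import numpy as np

def p_rds(ls_ratio, ga_weeks, b0=30.0, b_ls=-2.0, b_ga=-0.8):
    # Invented coefficients; the paper's fitted model is not reproduced here.
    z = b0 + b_ls * ls_ratio + b_ga * ga_weeks
    return 1.0 / (1.0 + np.exp(-z))

for ga, ls in [(34, 3.4), (36, 2.6), (38, 1.6)]:
    print(f"GA {ga} wk, L/S {ls}: P(RDS) = {p_rds(ls, ga):.3f}")
```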
Wang, Xiao-Sheng; Peng, Chun-Zi; Cai, Wei-Jun; Xia, Jian; Jin, Daozhong; Dai, Yuqiao; Luo, Xue-Gang; Klyachko, Vitaly A.; Deng, Pan-Yue
2014-01-01
Transcriptional silencing of the Fmr1 gene encoding fragile X mental retardation protein (FMRP) causes Fragile X Syndrome (FXS), the most common form of inherited intellectual disability and the leading genetic cause of autism. FMRP has been suggested to play important roles in regulating neurotransmission and short-term synaptic plasticity at excitatory hippocampal and cortical synapses. However, the origins and the mechanisms of these FMRP actions remain incompletely understood, and the role of FMRP in regulating synaptic release probability and presynaptic function remains debated. Here we used variance-mean analysis and peak scaled nonstationary variance analysis to examine changes in both pre- and postsynaptic parameters during repetitive activity at excitatory CA3-CA1 hippocampal synapses in a mouse model of FXS. Our analyses revealed that loss of FMRP did not affect the basal release probability or basal synaptic transmission, but caused an abnormally elevated release probability specifically during repetitive activity. These abnormalities were not accompanied by changes in EPSC kinetics, quantal size or postsynaptic AMPA receptor conductance. Our results thus indicate that FMRP regulates neurotransmission at excitatory hippocampal synapses specifically during repetitive activity via modulation of release probability in a presynaptic manner. Our study suggests that FMRP function in regulating neurotransmitter release is an activity-dependent phenomenon that may contribute to the pathophysiology of FXS.
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
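One common member of the (inverted) S-shaped family, shown here purely as an illustration, is the Prelec weighting function w(p) = exp(-(-ln p)^γ); γ < 1 overweights small probabilities and underweights large ones, and the paper's point is that such γ-like parameters vary across conditions and individuals.

```python
import numpy as np

def prelec(p, gamma=0.6):
    # w(p) = exp(-(-ln p)^gamma); gamma = 1 recovers the identity.
    p = np.asarray(p, dtype=float)
    return np.exp(-(-np.log(p)) ** gamma)

for p in (0.01, 0.1, 0.5, 0.9):
    print(f"w({p}) = {prelec(p):.3f}")  # small p overweighted, large p underweighted
```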
Preobrazhenskaia, L A; Ioffe, M E; Mats, V N
2004-01-01
The role of the prefrontal cortex in the active choice between two feeders was investigated under changes in the value and probability of reinforcement. The experiments were performed on 2 dogs with prefrontal ablation (g. proreus). Before the lesions, the dogs were trained to receive food from two different feeders in response to conditioned stimuli with equally probable alimentary reinforcement. After ablation, the dogs ran from one feeder to the other in the inter-trial intervals and, in response to the conditioned stimuli, repeatedly chose the same feeder. This disturbance of behavior eventually recovered completely. In experiments pitting probability against value of reinforcement, the dogs chose the feeder with low-probability but better-quality reinforcement. In experiments with equal value but different probability, intact dogs chose the feeder with the higher probability, whereas the dogs with prefrontal lesions chose each feeder equiprobably. Thus, under conditions of free behavior, one of the functions of the prefrontal cortex is the choice of reactions with the higher probability of reinforcement.
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
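The general idea can be sketched by Monte Carlo for the 2AFC paradigm: the optimal observer chooses the interval with the larger likelihood ratio, so the maximum Pc is the probability that the signal interval produces the larger ratio, with ties split by guessing. The uniform distributions below are illustrative; this is not the authors' MATLAB code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
noise = stats.uniform(loc=0.0, scale=1.0)    # activity distribution under noise
signal = stats.uniform(loc=0.3, scale=1.0)   # shifted distribution under signal

def lr(x):                                   # likelihood ratio, clamped to avoid 0/0
    return signal.pdf(x) / np.maximum(noise.pdf(x), 1e-300)

xs = signal.rvs(10**6, random_state=rng)
xn = noise.rvs(10**6, random_state=rng)
ls, ln = lr(xs), lr(xn)
pc_max = np.mean(ls > ln) + 0.5 * np.mean(ls == ln)   # ties resolved by guessing
print(f"maximum proportion correct ~ {pc_max:.4f}")
```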
Prior probability modulates anticipatory activity in category-specific areas.
Trapp, Sabrina; Lepsien, Jöran; Kotz, Sonja A; Bar, Moshe
2016-02-01
Bayesian models are currently a dominant framework for describing human information processing. However, it is not clear yet how major tenets of this framework can be translated to brain processes. In this study, we addressed the neural underpinning of prior probability and its effect on anticipatory activity in category-specific areas. Before fMRI scanning, participants were trained in two behavioral sessions to learn the prior probability and correct order of visual events within a sequence. The events of each sequence included two different presentations of a geometric shape and one picture of either a house or a face, which appeared with either a high or a low likelihood. Each sequence was preceded by a cue that gave participants probabilistic information about which items to expect next. This allowed examining cue-related anticipatory modulation of activity as a function of prior probability in category-specific areas (fusiform face area and parahippocampal place area). Our findings show that activity in the fusiform face area was higher when faces had a higher prior probability. The finding of a difference between levels of expectations is consistent with graded, probabilistically modulated activity, but the data do not rule out the alternative explanation of a categorical neural response. Importantly, these differences were only visible during anticipation, and vanished at the time of stimulus presentation, calling for a functional distinction when considering the effects of prior probability. Finally, there were no anticipatory effects for houses in the parahippocampal place area, suggesting sensitivity to stimulus material when looking at effects of prediction.
Mondal, Mukti; Sarkar, Kaushik; Nath, Partha Pratim; Paul, Goutam
2018-02-01
The aim of the present study was to examine the effect of monosodium glutamate (MSG) on the functions of the ovary and uterus in the rat. Virgin female rats of the Charles Foster strain (approximately 120 g) were administered MSG by oral gavage at dose levels of 0.8, 1.6, and 2.4 g/kg BW/day, respectively, for 30 and 40 days. We observed a significant decrease in the duration of the proestrus, estrus and metestrus phases, and an increase in the duration of the diestrus phase and the diestrus index compared to control. We found significant increases in the levels of serum LH, FSH and estradiol in the test groups. We also observed a significant increase in the number of primary and primordial follicles, an increase in the size of the graafian follicle, and a decrease in the size of the corpus luteum. Further, we observed significant increases in the activities of SOD, CAT and GST, decreases in the activities of GR and GPx, and a decreased MDA level in the MSG-exposed groups. These results suggest that MSG impairs the functions of the ovary, probably by augmenting the release of FSH, LH and estradiol, promoting follicular maturation, and altering the biochemical mechanism for antioxidant defense. We also observed significant potentiation of the force of contraction of the uterus in the estrus, metestrus and diestrus phases. This result suggests that MSG potentiates the contraction of the uterus, probably by stimulating estradiol sensitivity to oxytocin. From these results it is concluded that MSG suppresses female reproductive function in the rat, probably by impairing the functions of the ovary and uterus. © 2017 Wiley Periodicals, Inc.
Velocity statistics of the Nagel-Schreckenberg model
NASA Astrophysics Data System (ADS)
Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael
2016-02-01
The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
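For readers who want to reproduce such a velocity PDF, a minimal sketch of the standard Nagel-Schreckenberg update (acceleration, braking, random slowdown, movement) on a ring is given below; the density, slowdown probability, and run lengths are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def nasch_step(pos, vel, L, vmax, p):
    """One parallel update of the Nagel-Schreckenberg cellular automaton."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    # Gap (empty cells) to the vehicle ahead on a ring of L cells.
    gap = (np.roll(pos, -1) - pos - 1) % L
    vel = np.minimum(vel + 1, vmax)          # 1. acceleration
    vel = np.minimum(vel, gap)               # 2. braking to avoid collision
    brake = rng.random(vel.size) < p
    vel = np.maximum(vel - brake, 0)         # 3. random slowdown
    pos = (pos + vel) % L                    # 4. movement
    return pos, vel

L, N, vmax, p = 1000, 300, 5, 0.25           # density 0.3 (illustrative)
pos = rng.choice(L, size=N, replace=False)
vel = np.zeros(N, dtype=int)
velocities = []
for t in range(2000):
    pos, vel = nasch_step(pos, vel, L, vmax, p)
    if t >= 500:                             # discard the transient
        velocities.append(vel.copy())
v = np.concatenate(velocities)
pdf = np.bincount(v, minlength=vmax + 1) / v.size
print("velocity PDF:", pdf)                  # pdf[0]: probability of a standing vehicle
```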
NASA Astrophysics Data System (ADS)
Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.
2009-05-01
The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers manage their operations accurately, and helps the regulator monitor periods deviating from the normal distribution. Based on the assumption of normally distributed load and a non-linear aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey an approximately normal distribution only when the supply-demand relationship is loose, whereas otherwise the prices deviate from the normal distribution and exhibit strong right-skewness. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
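The paper's calibrated Zhejiang supply curve is not available here; the sketch below simply pushes a normally distributed load through an assumed convex (exponential) supply curve by Monte Carlo to reproduce the qualitative finding, right-skewed prices when supply is tight. All parameter values are illustrative:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)

# Assumed (illustrative) non-linear aggregate supply curve: price as a convex
# function of load q; the paper derives the price distribution analytically.
def supply_price(q, a=10.0, b=0.004):
    return a * np.exp(b * q)

mu, sigma = 800.0, 60.0                 # assumed load mean/std (MW)
load = rng.normal(mu, sigma, 200_000)
price = supply_price(load)

print("price mean:", round(price.mean(), 2))
print("price skew:", round(skew(price), 3))   # > 0: right-skewed under tight supply
# With a looser supply-demand balance (smaller b) the curve is locally flat,
# and the price distribution stays approximately normal.
```

Analytically, the same result follows from the change-of-variables formula f_P(p) = f_Q(q(p)) |dq/dp| applied to the inverse supply curve.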
Probability density function approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1994-01-01
The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density function of the species mass fractions and enthalpy is obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suited to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without PDF are observed.
Atom-counting in High Resolution Electron Microscopy: TEM or STEM - That's the question.
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2017-03-01
In this work, a recently developed quantitative approach based on the principles of detection theory is used in order to determine the possibilities and limitations of High Resolution Scanning Transmission Electron Microscopy (HR STEM) and HR TEM for atom-counting. So far, HR STEM has been shown to be an appropriate imaging mode to count the number of atoms in a projected atomic column. Recently, it has been demonstrated that HR TEM, when using negative spherical aberration imaging, is suitable for atom-counting as well. The capabilities of both imaging techniques are investigated and compared using the probability of error as a criterion. It is shown that for the same incoming electron dose, HR STEM outperforms HR TEM under common practice standards, i.e. when the decision is based on the probability function of the peak intensities in HR TEM and of the scattering cross-sections in HR STEM. If the atom-counting decision is based on the joint probability function of the image pixel values, the dependence of all image pixel intensities as a function of thickness should be known accurately. Under this assumption, the probability of error may decrease significantly for atom-counting in HR TEM and may, in theory, become lower as compared to HR STEM under the predicted optimal experimental settings. However, the commonly used standard for atom-counting in HR STEM leads to a high performance and has been shown to work in practice. Copyright © 2017 Elsevier B.V. All rights reserved.
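As a rough illustration of the probability-of-error criterion: if the measured quantity (a peak intensity or a scattering cross-section) for an n-atom column is modeled as Gaussian, the error probability of the maximum-likelihood count equals one minus the integral of the pointwise maximum of the prior-weighted densities. A hedged Python sketch with invented means and widths, not the paper's simulated libraries:

```python
import numpy as np

def prob_error(means, sigmas, priors=None, grid=20001):
    """Probability of error of a maximum-likelihood atom-count decision when
    the measured quantity for an n-atom column is Gaussian(means[n], sigmas[n])."""
    means = np.asarray(means, float)
    sigmas = np.asarray(sigmas, float)
    if priors is None:
        priors = np.full(means.size, 1.0 / means.size)
    x = np.linspace(means.min() - 6 * sigmas.max(),
                    means.max() + 6 * sigmas.max(), grid)
    dens = priors[:, None] * np.exp(-0.5 * ((x - means[:, None]) / sigmas[:, None])**2) \
           / (sigmas[:, None] * np.sqrt(2 * np.pi))
    # Pe = 1 - integral of the pointwise maximum of the prior-weighted densities:
    # the ML rule is correct exactly where its class density is the largest.
    return 1.0 - np.trapz(dens.max(axis=0), x)

# Invented means/widths for columns of 1-4 atoms at a fixed electron dose.
print(prob_error(means=[1.0, 1.9, 2.7, 3.4], sigmas=[0.3, 0.3, 0.3, 0.3]))
```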
Waiting for the Bus: When Base-Rates Refuse to Be Neglected
ERIC Educational Resources Information Center
Teigen, Karl Halvor; Keren, Gideon
2007-01-01
The paper reports the results from 16 versions of a simple probability estimation task, where probability estimates derived from base-rate information have to be modified by case knowledge. In the bus problem [adapted from Falk, R., Lipson, A., & Konold, C. (1994), the ups and downs of the hope function in a fruitless search. In G. Wright & P.…
Mortality, or Probability of Death, from a Suicidal Act in the United States
ERIC Educational Resources Information Center
Friedmann, Harry; Kohn, Robert
2008-01-01
The probability of death resulting from a suicidal act as a function of age is explored. Until recently, data on suicide attempts in the United States were not available, and therefore the relationship between attempts and completed suicide could not be systematically investigated. Now, with new surveillance of self-harm data from the Centers for…
Shauna M. Uselman; Keirith A. Snyder; Elizabeth A. Leger; Sara E. Duke
2015-01-01
Comparing emergence and survival probabilities, early seral natives generally outperformed late seral natives when growing with exotics and had earlier emergence timing, although results differed among functional groups and soil types. In contrast, survival probabilities did not differ between the early and late seral mixes when growing without exotics. Within each...
Time Neutron Technique for UXO Discrimination
2010-12-01
[Only fragments of the report's front matter survive extraction: an acronym list (Composition 4 (C-4) military plastic explosive, a mixture of TNT and RDX; CFD, Constant Fraction Discriminator; cps, counts per second; CsI, an inorganic scintillator; PDFs, Probability Density Functions; PET, Positron Emission Tomography; Pfa, Probability of False Alarm; PFTNA, Pulsed Fast/Thermal Neutron Analysis; PMTs, photomultiplier tubes) and a fragment on identifying the ordnance type (rocket, mortar, projectile, etc.) and its filler material (inert or empty, practice, HE, illumination, chemical).]
A simulator for evaluating methods for the detection of lesion-deficit associations
NASA Technical Reports Server (NTRS)
Megalooikonomou, V.; Davatzikos, C.; Herskovits, E. H.
2000-01-01
Although much has been learned about the functional organization of the human brain through lesion-deficit analysis, the variety of statistical and image-processing methods developed for this purpose precludes a closed-form analysis of the statistical power of these systems. Therefore, we developed a lesion-deficit simulator (LDS), which generates artificial subjects, each of which consists of a set of functional deficits, and a brain image with lesions; the deficits and lesions conform to predefined distributions. We used probability distributions to model the number, sizes, and spatial distribution of lesions, to model the structure-function associations, and to model registration error. We used the LDS to evaluate, as examples, the effects of the complexities and strengths of lesion-deficit associations, and of registration error, on the power of lesion-deficit analysis. We measured the numbers of recovered associations from these simulated data, as a function of the number of subjects analyzed, the strengths and number of associations in the statistical model, the number of structures associated with a particular function, and the prior probabilities of structures being abnormal. The number of subjects required to recover the simulated lesion-deficit associations was found to have an inverse relationship to the strength of associations, and to the smallest probability in the structure-function model. The number of structures associated with a particular function (i.e., the complexity of associations) had a much greater effect on the performance of the analysis method than did the total number of associations. We also found that registration error of 5 mm or less reduces the number of associations discovered by approximately 13% compared to perfect registration. The LDS provides a flexible framework for evaluating many aspects of lesion-deficit analysis.
Combined-probability space and certainty or uncertainty relations for a finite-level quantum system
NASA Astrophysics Data System (ADS)
Sehrawat, Arun
2017-08-01
The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most physical properties of the earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion using initial models generated by our method gives higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
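One common way to draw such fractal initial models is spectral synthesis: filter white noise so its power spectrum follows a power law whose exponent is tied to the Hurst coefficient. The sketch below is a generic illustration of that idea, not the authors' algorithm; the relation beta = 2H + 1 holds for fractional Brownian profiles and is stated here as an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def fractal_realization(n, beta, mean=0.0, std=1.0):
    """Draw one realization of power-law correlated noise, S(f) ~ f**(-beta),
    by Fourier filtering of white noise; beta is tied to the Hurst coefficient
    (e.g. beta = 2H + 1 for fractional Brownian profiles)."""
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid division by zero at DC
    spec *= f ** (-beta / 2.0)        # amplitude filter: sqrt of S(f)
    x = np.fft.irfft(spec, n)
    x = (x - x.mean()) / x.std()      # rescale to the requested moments
    return mean + std * x

# e.g. an acoustic-impedance-like initial model with H = 0.7 (illustrative)
model = fractal_realization(1024, beta=2 * 0.7 + 1, mean=6.0e6, std=4.0e5)
print(model[:5])
```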
Fixation probability of a nonmutator in a large population of asexual mutators.
Jain, Kavita; James, Ananthu
2017-11-21
In an adapted population of mutators in which most mutations are deleterious, a nonmutator that lowers the mutation rate is under indirect selection and can sweep to fixation. Using a multitype branching process, we calculate the fixation probability of a rare nonmutator in a large population of asexual mutators. We show that when beneficial mutations are absent, the fixation probability is a nonmonotonic function of the mutation rate of the mutator: it first increases sublinearly and then decreases exponentially. We also find that beneficial mutations can enhance the fixation probability of a nonmutator. Our analysis is relevant to an understanding of recent experiments in which a reduction in the mutation rates has been observed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Luxton, Gary; Keall, Paul J; King, Christopher R
2008-01-07
To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
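For orientation, the standard LKB pipeline the abstract builds on can be written in a few lines: reduce the DVH to a generalized (Niemierko) EUD with volume parameter n, then evaluate the Lyman probit with m and TD50. A hedged sketch with illustrative, non-clinical parameters (the paper's exponential approximation itself is not reproduced here):

```python
import numpy as np
from math import erf, sqrt

def geud(doses, volumes, n):
    """Generalized (Niemierko) equivalent uniform dose from a differential DVH;
    `volumes` are fractional organ volumes summing to 1, `n` the volume effect."""
    d = np.asarray(doses, float)
    v = np.asarray(volumes, float)
    return float(np.sum(v * d ** (1.0 / n)) ** n)

def ntcp_lyman(eud, td50, m):
    """Lyman NTCP: standard normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Illustrative, non-clinical numbers in the style of Emami/Burman parameter fits.
doses   = [20.0, 40.0, 60.0]   # Gy bins of a differential DVH
volumes = [0.5, 0.3, 0.2]      # fractional volumes
eud = geud(doses, volumes, n=0.25)
print("EUD  =", round(eud, 2), "Gy")
print("NTCP =", round(ntcp_lyman(eud, td50=55.0, m=0.14), 4))
```

Uniform irradiation of the whole organ at the EUD gives the same NTCP as the original non-homogeneous distribution, which is the property the abstract exploits.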
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the period parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
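The quasi-periodic covariance referred to in the abstract is commonly written as k(tau) = A^2 exp(-tau^2/(2 l^2) - Gamma sin^2(pi tau / P)). A minimal sketch of that kernel and the Gaussian-process marginal likelihood it enters (toy data; hyperparameter values illustrative):

```python
import numpy as np

def qp_kernel(t1, t2, amp, l_evol, gamma, period):
    """Quasi-periodic covariance used for rotational modulation:
    k(tau) = amp^2 * exp(-tau^2 / (2 l^2) - gamma * sin^2(pi tau / P))."""
    tau = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * (tau / l_evol)**2
                           - gamma * np.sin(np.pi * tau / period)**2)

def log_likelihood(params, t, y, yerr):
    """Cholesky-based Gaussian-process marginal log-likelihood."""
    amp, l_evol, gamma, period = params
    K = qp_kernel(t, t, amp, l_evol, gamma, period) + np.diag(yerr**2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * (y @ alpha) - np.log(np.diag(L)).sum() \
           - 0.5 * len(t) * np.log(2 * np.pi)

# Toy light curve with a 7-day rotation period.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 60, 120))
y = 0.01 * np.sin(2 * np.pi * t / 7.0) + 0.002 * rng.standard_normal(t.size)
print(log_likelihood([0.01, 30.0, 5.0, 7.0], t, y, 0.002 * np.ones(t.size)))
# In practice one samples (amp, l_evol, gamma, log P) with MCMC and reports the
# posterior over P, marginalized over the other hyperparameters.
```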
Downlink Probability Density Functions for EOS-McMurdo Sound
NASA Technical Reports Server (NTRS)
Christopher, P.; Jackson, A. H.
1996-01-01
The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. We desire many attributes of the downlink, however, and a faster orbital determination method is desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium to high altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three dimensionally.
Cheng, Ruey-Kuang; Meck, Warren H.
2009-01-01
Choline supplementation of the maternal diet has a long-term facilitative effect on timing and temporal memory of the offspring. To further delineate the impact of early nutritional status on interval timing, we examined effects of prenatal-choline supplementation on the temporal sensitivity of adult (6 mo) male rats. Rats that were given sufficient choline in their chow (CON: 1.1 g/kg) or supplemental choline added to their drinking water (SUP: 3.5 g/kg) during embryonic days (ED) 12–17 were trained with a peak-interval procedure that was shifted among 75%, 50%, and 25% probabilities of reinforcement with transitions from 18 s → 36 s → 72 s temporal criteria. Prenatal-choline supplementation systematically sharpened interval-timing functions by reducing the associative/non-temporal response enhancing effects of reinforcement probability on the Start response threshold, thereby reducing non-scalar sources of variance in the left-hand portion of the Gaussian-shaped response functions. No effect was observed for the Stop response threshold as a function of any of these manipulations. In addition, independence of peak time and peak rate was demonstrated as a function of reinforcement probability for both prenatal-choline supplemented and control rats. Overall, these results suggest that prenatal-choline supplementation facilitates timing by reducing impulsive responding early in the interval, thereby improving the superimposition of peak functions for different temporal criteria. PMID:17996223
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
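To make the odds-ratio point concrete: a logistic fit returns coefficients whose exponentials are odds ratios, and only under random use/non-use sampling can the fitted probabilities themselves be read as probability of use. A self-contained sketch (simulated data; the IRLS fit is written out rather than taken from a library, and all values are illustrative):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via iteratively reweighted
    least squares (Newton's method); X should include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (W[:, None] * X)                 # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

# Toy use/non-use data with one habitat covariate under random sampling.
rng = np.random.default_rng(4)
cover = rng.uniform(0, 1, 500)
p_use = 1.0 / (1.0 + np.exp(-(-2.0 + 3.0 * cover)))
used = (rng.random(500) < p_use).astype(float)
X = np.column_stack([np.ones(500), cover])
beta = fit_logistic(X, used)
print("odds ratio per unit cover:", np.exp(beta[1]))
# Under case-control or use-availability sampling the intercept is biased, so
# only the odds ratio (not the probability of use) is directly interpretable.
```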
Covering Numbers for Semicontinuous Functions
2016-04-29
Keywords: semicontinuous functions, epi-distance, Attouch-Wets topology, epi-convergence, epi-spline, approximation theory. [Only fragments of the introduction survive extraction: covering numbers of classes of functions play central roles in parts of information theory, statistics, and applications such as machine learning; the hypo-distance metrizes weak convergence of distribution functions on IR^d, which are upper semicontinuous.]
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Numeracy moderates the influence of task-irrelevant affect on probability weighting.
Traczyk, Jakub; Fulawka, Kamil
2016-06-01
Statistical numeracy, defined as the ability to understand and process statistical and probability information, plays a significant role in superior decision making. However, recent research has demonstrated that statistical numeracy goes beyond simple comprehension of numbers and mathematical operations. In contrast to previous studies that focused on emotions integral to risky prospects, we hypothesized that highly numerate individuals would exhibit more linear probability weighting because they would be less biased by incidental and decision-irrelevant affect. Participants were instructed to make a series of insurance decisions preceded by negative (i.e., fear-inducing) or neutral stimuli. We found that incidental negative affect increased the curvature of the probability weighting function (PWF). Interestingly, this effect was significant only for less numerate individuals, while probability weighting in more numerate people was not altered by decision-irrelevant affect. We propose two candidate mechanisms for the observed effect. Copyright © 2016 Elsevier B.V. All rights reserved.
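The degree of curvature the abstract refers to is often summarized with a one-parameter weighting function such as the Tversky-Kahneman (1992) form; gamma = 1 is linear weighting and smaller gamma gives stronger inverse-S curvature. The values below are illustrative, not the study's estimates:

```python
import numpy as np

def tk_weight(p, gamma):
    """Tversky-Kahneman (1992) one-parameter probability weighting function;
    smaller gamma -> more curvature (overweighting of small probabilities)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1.0 / gamma)

p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print("linear weighting (gamma=1.0):", tk_weight(p, 1.0))
print("mild curvature   (gamma=0.9):", tk_weight(p, 0.9))
print("strong curvature (gamma=0.6):", tk_weight(p, 0.6))
```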
Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tianhui; Fu, Bina; Zhang, Dong H., E-mail: zhangdh@dicp.ac.cn
The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity, and the helicopter orientation yields a higher dissociation probability than the cartwheel orientation. An interesting new site-averaging effect is found for the title molecule-surface system: the six-dimensional dissociation probability can be essentially reproduced by averaging the four-dimensional dissociation probabilities over 25 fixed sites.
NASA Technical Reports Server (NTRS)
Tinto, Massimo; Armstrong, J. W.
1991-01-01
Massive coalescing binary systems are candidate sources of gravitational radiation in the millihertz frequency band accessible to spacecraft Doppler tracking experiments. This paper discusses signal processing and detection probability for waves from coalescing binaries in the regime where the signal frequency increases linearly with time, i.e., 'chirp' signals. Using known noise statistics, thresholds with given false alarm probabilities are established for one- and two-spacecraft experiments. Given the threshold, the detection probability is calculated as a function of gravitational wave amplitude for both one- and two-spacecraft experiments, assuming random polarization states and under various assumptions about wave directions. This allows quantitative statements about the detection efficiency of these experiments and the utility of coincidence experiments. In particular, coincidence probabilities for two-spacecraft experiments are insensitive to the angle between the directions to the two spacecraft, indicating that near-optimal experiments can be done without constraints on spacecraft trajectories.
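The threshold-then-detect logic generalizes as follows for an idealized Gaussian detection statistic (a simplification; the paper works with the actual Doppler noise statistics): fix the threshold from the false-alarm probability, then read off Pd as a function of amplitude.

```python
import numpy as np
from scipy.stats import norm

def detection_probability(amplitude, sigma, pfa):
    """Pd for a threshold test on a Gaussian statistic: the threshold is set
    from the false-alarm probability, and a signal shifts the mean by
    `amplitude`."""
    threshold = sigma * norm.ppf(1.0 - pfa)
    return 1.0 - norm.cdf((threshold - amplitude) / sigma)

snr = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # amplitude in units of sigma
print(detection_probability(snr, sigma=1.0, pfa=1e-3))
# Coincidence of two independent spacecraft: Pd_coinc = Pd1 * Pd2 at a relaxed
# per-detector threshold, which suppresses false alarms multiplicatively.
```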
Characteristic length of the knotting probability revisited
NASA Astrophysics Data System (ADS)
Uehara, Erica; Deguchi, Tetsuo
2015-09-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
Wu, Wei; Wang, Jin
2013-09-28
We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. In the small fluctuation limit, the intrinsic potential field as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems, which is closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers an alternative general approach to other field-theoretic techniques, to study the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.
Concurrent variation of response bias and sensitivity in an operant-psychophysical test.
NASA Technical Reports Server (NTRS)
Terman, M.; Terman, J. S.
1972-01-01
The yes-no signal detection procedure was applied to a single-response operant paradigm in which rats discriminated between a standard auditory intensity and attenuated comparison values. The payoff matrix was symmetrical (with reinforcing brain stimulation for correct detections and brief time-out for errors), but signal probability and intensity differences were varied to generate a family of isobias and isosensitivity functions. The d' parameter remained fairly constant across a wide range of bias levels. Isobias functions deviated from a strict matching strategy as discrimination difficulty increased, although an orderly relation was maintained between signal probability value and the degree and direction of response bias.
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2005-01-01
A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.
Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes
NASA Astrophysics Data System (ADS)
Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.
2017-12-01
We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.
Generalized Dynamic Equations Related to Condensation and Freezing Processes
NASA Astrophysics Data System (ADS)
Wang, Xingrong; Huang, Yong
2018-01-01
The generalized thermodynamic equation related to condensation and freezing processes was derived by introducing the condensation and freezing probability function into the dynamic framework based on the statistical thermodynamic fluctuation theory. As a result, the physical mechanism of some weather phenomena covered by using
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^{XY}(x, y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p_n^{XY}(x, y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
NASA Astrophysics Data System (ADS)
Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.
2018-05-01
As renewable energies are increasingly integrated into power systems, there is increasing interest in stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and consequently to analyse its impacts on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of the power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subjected to such random excitation, the Joint Probability Density Function (JPDF) solution for the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted, with special measures taken so that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered for a single machine infinite bus power system. The numerical analysis yields the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
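A Monte Carlo counterpart of this analysis is straightforward to sketch: integrate a stochastic swing equation for a single machine infinite bus system with Euler-Maruyama and histogram the stationary samples of (delta, omega). The model form and all parameter values below are illustrative assumptions, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed stochastic swing equation (per unit):
#   M * domega/dt = Pm - Pmax*sin(delta) - D*omega + sigma*xi(t),  ddelta/dt = omega
M, D, Pm, Pmax, sigma = 0.2, 0.05, 0.8, 1.2, 0.05
dt, steps = 1e-3, 500_000

delta, omega = np.arcsin(Pm / Pmax), 0.0     # start at the stable equilibrium
samples = np.empty((steps, 2))
for k in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal()  # Brownian increment
    omega += dt / M * (Pm - Pmax * np.sin(delta) - D * omega) + sigma / M * dW
    delta += dt * omega
    samples[k] = delta, omega

# Empirical joint PDF of (delta, omega): the Monte Carlo counterpart of the
# stationary generalized-FPK solution discussed in the abstract.
pdf, d_edges, w_edges = np.histogram2d(samples[:, 0], samples[:, 1],
                                       bins=80, density=True)
i, j = np.unravel_index(pdf.argmax(), pdf.shape)
print("PDF peaks near delta =", round(d_edges[i], 3), ", omega =", round(w_edges[j], 3))
```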
Iconic hand gestures and the predictability of words in context in spontaneous speech.
Beattie, G; Shovelton, H
2000-11-01
This study presents a series of empirical investigations to test a theory of speech production proposed by Butterworth and Hadar (1989; revised in Hadar & Butterworth, 1997) that iconic gestures have a functional role in lexical retrieval in spontaneous speech. Analysis 1 demonstrated that words which were totally unpredictable (as measured by the Shannon guessing technique) were more likely to occur after pauses than after fluent speech, in line with earlier findings. Analysis 2 demonstrated that iconic gestures were associated with words of lower transitional probability than words not associated with gesture, even when grammatical category was controlled. This therefore provided new supporting evidence for Butterworth and Hadar's claims that gestures' lexical affiliates are indeed unpredictable lexical items. However, Analysis 3 found that iconic gestures were not occasioned by lexical accessing difficulties because although gestures tended to occur with words of significantly lower transitional probability, these lower transitional probability words tended to be uttered quite fluently. Overall, therefore, this study provided little evidence for Butterworth and Hadar's theoretical claim that the main function of the iconic hand gestures that accompany spontaneous speech is to assist in the process of lexical access. Instead, such gestures are reconceptualized in terms of communicative function.
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set, and of the interdependency among alternative routes, on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because of the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Metropolis-Hastings sampling-based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188
NASA Astrophysics Data System (ADS)
Schwartz, Craig R.; Thelen, Brian J.; Kenton, Arthur C.
1995-06-01
A statistical parametric multispectral sensor performance model was developed by ERIM to support mine field detection studies, multispectral sensor design/performance trade-off studies, and target detection algorithm development. The model assumes target detection algorithms and their performance models which are based on data assumed to obey multivariate Gaussian probability distribution functions (PDFs). The applicability of these algorithms and performance models can be generalized to data having non-Gaussian PDFs through the use of transforms which convert non-Gaussian data to Gaussian (or near-Gaussian) data. An example of one such transform is the Box-Cox power law transform. In practice, such a transform can be applied to non-Gaussian data prior to the introduction of a detection algorithm that is formally based on the assumption of multivariate Gaussian data. This paper presents an extension of these techniques to the case where the joint multivariate probability density function of the non-Gaussian input data is known, and where the joint estimate of the multivariate Gaussian statistics, under the Box-Cox transform, is desired. The jointly estimated multivariate Gaussian statistics can then be used to predict the performance of a target detection algorithm which has an associated Gaussian performance model.
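As a concrete illustration of the transform step (using the generic maximum-likelihood Box-Cox estimator rather than the paper's joint multivariate procedure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Skewed (non-Gaussian) band data, e.g. lognormal-like pixel intensities.
x = rng.lognormal(mean=0.0, sigma=0.7, size=5000)

# Maximum-likelihood Box-Cox power transform toward normality:
#   y = (x**lam - 1) / lam  for lam != 0,   y = log(x)  for lam == 0
y, lam = stats.boxcox(x)
print("estimated lambda  :", round(lam, 3))
print("skew before/after :", round(stats.skew(x), 3), "/", round(stats.skew(y), 3))
# The Gaussian mean/covariance estimated from y can then feed a detector whose
# performance model assumes multivariate Gaussian statistics.
```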
Probabilities and statistics for backscatter estimates obtained by a scatterometer
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.
Diffraction and quantum control of wave functions in nonresonant two-photon absorption
NASA Astrophysics Data System (ADS)
Li, Baihong; Pang, Huafeng; Wang, Doudou; Zhang, Tao; Dong, Ruifang; Li, Yongfang
2018-03-01
In this study, the nonresonant two-photon absorption process in a two-level atom, induced by a weak chirped pulse, is theoretically investigated in the frequency domain. An analytical expression of the wave function expressed by Fresnel functions is obtained, and the two-photon transition probability (TPTP) versus the integral bandwidth, spectral width, and chirp parameter is analyzed. The results indicate that the oscillatory evolution of the TPTP results from quantum diffraction of the wave function, which can be explained by analogy with Fresnel diffraction from a wide slit in the spatial domain. Moreover, the ratio between the real and imaginary parts of the excited-state wave function and, hence, the atomic polarization, can be controlled by the initial phase of the excitation pulse. For certain initial phases of the excitation pulse, wave functions with purely real or purely imaginary parts can be obtained by measuring the population probability. This work provides a novel perspective for understanding the physical details of the interactions between atoms and chirped light pulses in the multiphoton process.
An evaluation of procedures to estimate monthly precipitation probabilities
NASA Astrophysics Data System (ADS)
Legates, David R.
1991-01-01
Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
Eye light flashes on the Mir space station.
Avdeev, S; Bidoli, V; Casolino, M; De Grandis, E; Furano, G; Morselli, A; Narici, L; De Pascale, M P; Picozza, P; Reali, E; Sparvoli, R; Boezio, M; Carlson, P; Bonvicini, W; Vacchi, A; Zampa, N; Castellini, G; Fuglesang, C; Galper, A; Khodarovich, A; Ozerov, Yu; Popov, A; Vavilov, N; Mazzenga, G; Ricci, M; Sannita, W G; Spillantini, P
2002-04-01
The phenomenon of light flashes (LF) in eyes for people in space has been investigated onboard Mir. Data on particles hitting the eye have been collected with the SilEye detectors, and correlated with human observations. It is found that a nucleus in the radiation environment of Mir has roughly a 1% probability to cause an LF, whereas the proton probability is almost three orders of magnitude less. As a function of LET, the LF probability increases above 10 keV/μm, reaching about 5% at around 50 keV/μm. © 2002 Elsevier Science Ltd. All rights reserved.
Probability theory for 3-layer remote sensing in ideal gas law environment.
Ben-David, Avishai; Davidson, Charles E
2013-08-26
We extend the probability model for 3-layer radiative transfer [Opt. Express 20, 10004 (2012)] to ideal gas conditions where a correlation exists between transmission and temperature of each of the 3 layers. The effect on the probability density function for the at-sensor radiances is surprisingly small, and thus the added complexity of addressing the correlation can be avoided. The small overall effect is due to (a) small perturbations by the correlation on variance population parameters and (b) cancellation of perturbation terms that appear with opposite signs in the model moment expressions.
Evaluation of a combined index of optic nerve structure and function for glaucoma diagnosis
2011-01-01
Background: The definitive diagnosis of glaucoma is currently based on congruent damage to both optic nerve structure and function. Given widespread quantitative assessment of both structure (imaging) and function (automated perimetry) in glaucoma, it should be possible to combine these quantitative data to diagnose disease. We have therefore defined and tested a new approach to glaucoma diagnosis by combining imaging and visual field data, using the anatomical organization of retinal ganglion cells.
Methods: Data from 1499 eyes of glaucoma suspects and 895 eyes with glaucoma were identified at a single glaucoma center. Each underwent Heidelberg Retinal Tomograph (HRT) imaging and standard automated perimetry. A new measure combining these two tests, the structure function index (SFI), was defined in 3 steps: 1) calculate the probability that each visual field point is abnormal, 2) calculate the probability of abnormality for each of the six HRT optic disc sectors, and 3) combine those probabilities with the probability that a field point and disc sector are linked by ganglion cell anatomy. The SFI was compared to the HRT and visual field using receiver operating characteristic (ROC) analysis.
Results: The SFI produced an area under the ROC curve (0.78) that was similar to that for both visual field mean deviation (0.78) and pattern standard deviation (0.80) and larger than that for a normalized measure of HRT rim area (0.66). The cases classified as glaucoma by the various tests were significantly non-overlapping. Based on the distribution of test values in the population with mild disease, the SFI may be better able to stratify this group while still clearly identifying those with severe disease.
Conclusions: The SFI reflects the traditional clinical diagnosis of glaucoma by combining optic nerve structure and function. In doing so, it identifies a different subset of patients than either visual field testing or optic nerve head imaging alone. Analysis of prospective data will allow us to determine whether the combined index of structure and function can provide an improved standard for glaucoma diagnosis. PMID:21314957
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastical rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates of following applications, like hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one certain non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities, results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic. Employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower equal and greater equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x, knowing prior information on these parameters and a likelihood which gives the probability density of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation to be solved numerically (minimization of a cost function, i.e. the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte-Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations present in traditional adjustment procedures based on chi-square minimization, and they offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance ranges to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains are presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, are presented.
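In its simplest form, BMC can be realized as importance sampling: draw parameter sets from the prior, weight each by its likelihood against the data, and compute posterior moments from the weighted samples. A toy sketch with an invented two-parameter model (not an actual nuclear reaction model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "theory": a cross section modeled as sigma(E) = a * exp(-b * E).
def model(params, E):
    a, b = params
    return a * np.exp(-b * E)

E = np.linspace(0.1, 5.0, 20)
data = model((3.0, 0.6), E) + 0.05 * rng.standard_normal(E.size)  # pseudo-experiment
unc = 0.05                                                        # assumed data uncertainty

# BMC as importance sampling: sample the prior, weight each sample by its
# Gaussian likelihood, and take weighted moments as the posterior estimate.
prior = rng.normal([2.5, 0.5], [0.5, 0.2], size=(100_000, 2))
resid = (model(prior.T, E[:, None]) - data[:, None]) / unc
logw = -0.5 * np.sum(resid**2, axis=0)
w = np.exp(logw - logw.max())
w /= w.sum()
post_mean = w @ prior
post_cov = (prior - post_mean).T @ ((prior - post_mean) * w[:, None])
print("posterior mean:", post_mean)
print("posterior cov :\n", post_cov)
```

A Markov-chain variant replaces the independent prior draws with a Metropolis walk, which is more efficient when the likelihood confines the parameters to a small region of the prior.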
Wikstrom, Erik A; McKeon, Patrick O
2017-04-01
Therapeutic modalities that stimulate sensory receptors around the foot-ankle complex improve chronic ankle instability (CAI)-associated impairments. However, not all patients respond equally to these modalities. Identifying predictors of treatment success could improve clinician efficiency when treating patients with CAI. The objective was to conduct a response analysis on existing data to identify predictors of improved self-reported function in patients with CAI, using a secondary analysis of a randomized controlled clinical trial conducted in sports medicine research laboratories. Fifty-nine patients with CAI, defined in accordance with the International Ankle Consortium recommendations, were randomized into 3 treatment groups (plantar massage [PM], ankle-joint mobilization [AJM], or calf stretching [CS]) that received six 5-minute treatments over 2 weeks. Treatment success was defined as a patient exceeding the minimal clinically important difference on the Foot and Ankle Ability Measure-Sport (FAAM-S). Patients with ≤5 recurrent sprains and ≤82.73% on the Foot and Ankle Ability Measure had a 98% probability of a meaningful FAAM-S improvement after AJM. Similarly, patients with ≥5 balance errors had a 98% probability of a meaningful FAAM-S improvement from AJM. Patients <22 years old and with ≤9.9 cm of dorsiflexion had a 99% probability of a meaningful FAAM-S improvement after PM. Also, those who made ≥2 single-limb-stance errors had a 98% probability of a meaningful FAAM-S improvement from PM. Patients with ≤53.1% on the FAAM-S had an 83% probability of a meaningful FAAM-S improvement after CS. Each sensory-targeted ankle-rehabilitation strategy resulted in a unique combination of predictors of success for patients with CAI. Specific indicators of success with AJM were deficits in self-reported function, single-limb balance, and <5 previous sprains. Age, weight-bearing dorsiflexion restrictions, and single-limb balance deficits identified patients with CAI who will respond well to PM. Assessing self-reported sport-related function can identify patients with CAI who will respond positively to CS.
Probability and Statistics in Sensor Performance Modeling
2010-12-01
The software program is called Environmental Awareness for Sensor and Emitter Employment (EASEE). The report addresses important numerical issues in the implementation and the statistical analysis used for measuring sensor performance, including the cumulative distribution function (cdf), the complementary cumulative distribution function (ccdf), and a decision-support tool (DST).
[Air conducted ocular VEMP: II. First clinical investigations].
Walther, L E; Schaaf, H; Sommer, D; Hörmann, K
2011-10-01
Vestibular-evoked myogenic potentials (VEMP) are widely used to assess vestibular function. Air-conducted (AC) cervical VEMP (cVEMP) reflect saccule and inferior vestibular nerve function. Ocular VEMP (oVEMP), however, have hardly been examined to date. Recent studies have assumed that AC oVEMP probably reflect superior vestibular nerve function. The aim of this pilot study was to evaluate the clinical application of AC oVEMP. AC oVEMP were recorded in patients with peripheral vestibular disorders (n=21). In addition, caloric (thermal) testing and the head impulse test were performed, and AC cVEMP were recorded. For intense AC sound stimulation, tone bursts (500 Hz) at 100 dB nHL were used. In peripheral vestibular disorders, AC oVEMP and AC cVEMP could be classified into: • type 1 (inferior vestibular neuritis), with loss of AC cVEMP but normal AC oVEMP, • type 2, a probable type of superior vestibular neuritis, showing present AC cVEMP but loss of AC oVEMP, • type 3, probable complete vestibular neuritis, with neither AC oVEMP nor AC cVEMP. AC oVEMP may be used as an appropriate test for clinical investigation in patients with vestibular disorders. AC oVEMP is an additional, essential test for assessing otolith function alongside AC cVEMP. Further vestibular tests are necessary for precise clinical interpretation. © Georg Thieme Verlag KG Stuttgart · New York.
The (virtual) conceptual necessity of quantum probabilities in cognitive psychology.
Blutner, Reinhard; beim Graben, Peter
2013-06-01
We propose a way in which Pothos & Busemeyer (P&B) could strengthen their position. Taking a dynamic stance, we consider cognitive tests as functions that transfer a given input state into the state after testing. Under very general conditions, it can be shown that testable properties in cognition form an orthomodular lattice. Gleason's theorem then yields the conceptual necessity of quantum probabilities (QP).
NASA Technical Reports Server (NTRS)
Clarke, C. A.; Brown, E. L.
1980-01-01
The possible effects of free carbon fibers on aircraft avionic equipment operation, removal costs, and safety were investigated. Possible carbon fiber flow paths, flow rates, and transfer functions into the Boeing 707, 727, 737, 747 aircraft and potentially vulnerable equipment were identified. Probabilities of equipment removal and probabilities of aircraft exposure to carbon fiber were derived.
NASA Astrophysics Data System (ADS)
Gontis, V.; Kononovicius, A.
2017-10-01
We address the problem of long-range memory in financial markets. There are two conceptually different ways to reproduce the power-law decay of the autocorrelation function: fractional Brownian motion and non-linear stochastic differential equations. In this contribution we approach the problem by analyzing empirical return and trading activity time series from the Forex market. From the empirical time series we obtain the probability density functions of burst and inter-burst durations. Our analysis reveals that the power-law exponents of the obtained probability density functions are close to 3/2, which is a characteristic feature of one-dimensional stochastic processes. This is in good agreement with the earlier proposed model of absolute return based on non-linear stochastic differential equations derived from the agent-based herding model.
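The burst-duration analysis can be sketched in Python as follows, with a synthetic one-dimensional random walk standing in for the Forex series; the threshold, the series, and the binning choices are assumptions, and the point is only that above-threshold run lengths exhibit the expected exponent near 3/2.

import numpy as np

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(size=2_000_000))  # stand-in 1-D process
above = series > np.median(series)              # burst = run above threshold

# Lengths of consecutive runs above the threshold = burst durations.
change = np.flatnonzero(np.diff(above.astype(int)) != 0) + 1
runs = np.split(above, change)
durations = np.array([len(r) for r in runs if r[0]])

# Crude power-law exponent from the slope of a log-log histogram.
bins = np.logspace(0, 4, 25)
hist, edges = np.histogram(durations, bins=bins, density=True)
centers = np.sqrt(edges[1:] * edges[:-1])
mask = hist > 0
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
print(f"estimated exponent: {slope:.2f} (expected near -1.5)")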
NASA Astrophysics Data System (ADS)
Lugauer, F. P.; Stiehl, T. H.; Zaeh, M. F.
Modern laser systems are widely used in industry due to their excellent flexibility and high beam intensities. This leads to an increased hazard potential, because conventional laser safety barriers offer only a short protection time when illuminated with high laser power. For this reason, active systems are increasingly used to prevent accidents with laser machines. These systems must fulfil the requirements of functional safety, e.g. according to IEC 61508, which entails high costs. The safety provided by common passive barriers is usually left out of consideration in this context. In the presented approach, active and passive systems are evaluated from a holistic perspective. To assess the functional safety of hybrid safety systems, the failure probability of passive barriers is analysed and combined with the failure probability of the active system.
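The abstract does not state the combination rule, but a minimal sketch of crediting a passive barrier, assuming the barrier and the active system fail independently and that a hazard requires both to fail (so the failure probabilities multiply), with purely hypothetical numbers, looks like this:

# All numbers below are assumed for illustration only.
active_pfh = 1e-6     # dangerous failures per hour, active system alone
passive_fail = 5e-3   # probability the passive barrier fails to contain

# Hazard requires both to fail; independence assumed, so multiply.
hybrid_pfh = active_pfh * passive_fail
print(f"hybrid dangerous-failure rate: {hybrid_pfh:.1e} per hour")
# Crediting the barrier lowers the demonstrated failure rate by the
# factor passive_fail, which can relax the (costly) requirements placed
# on the active system for a given IEC 61508 target.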
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.
Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-09-15
In wireless powered communication networks (WPCNs), it is essential to study energy-efficiency fairness in order to evaluate the balance of nodes between receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy-efficiency proportional fairness in WPCNs. The main idea is to use stochastic geometry to derive the mean proportional-fairness utility function with respect to the user association probability and the receive threshold. We then prove that the relaxed proportional-fairness utility function is concave in the user association probability and in the receive threshold, respectively. On this basis, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm obtains results close to optimal energy-efficiency proportional fairness with a significant reduction in computational complexity.
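A minimal Python sketch of the alternating optimization idea, using a stand-in concave utility U(a, t) of an association probability a and a receive threshold t (the paper's actual utility comes from stochastic geometry and is not reproduced here):

import numpy as np
from scipy.optimize import minimize_scalar

def utility(a, t):
    # Stand-in utility: concave in a for fixed t and in t for fixed a.
    return np.log(1 + 4 * a * t) - 0.8 * a**2 - 0.5 * t**2

a, t = 0.5, 0.5
for _ in range(50):
    # Maximize over a with t fixed, then over t with a fixed.
    a = minimize_scalar(lambda x: -utility(x, t),
                        bounds=(0, 1), method="bounded").x
    t_new = minimize_scalar(lambda x: -utility(a, x),
                            bounds=(0, 2), method="bounded").x
    if abs(t_new - t) < 1e-9:   # alternating steps have converged
        t = t_new
        break
    t = t_new
print(f"a* = {a:.4f}, t* = {t:.4f}, U = {utility(a, t):.4f}")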
Measurement Model Nonlinearity in Estimation of Dynamical Systems
NASA Astrophysics Data System (ADS)
Majji, Manoranjan; Junkins, J. L.; Turner, J. D.
2012-06-01
The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and the geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered in the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for choosing coordinates for the estimation of dynamic systems.
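A small worked example of the exact density transformation under a smooth monotone mapping, here y = exp(x) with the change-of-variables formula p_Y(y) = p_X(g⁻¹(y)) |dg⁻¹/dy|, cross-checked against Monte Carlo; the mapping and distribution are illustrative choices, not the paper's astrodynamics examples:

import numpy as np
from scipy.stats import norm

g, g_inv = np.exp, np.log    # smooth, analytic, monotone mapping
rng = np.random.default_rng(3)

x = rng.normal(1.0, 0.3, 200_000)
y = g(x)                     # Monte Carlo push-forward of the density

grid = np.linspace(0.5, 8.0, 200)
# Exact transformed density via the change-of-variables formula:
# p_Y(y) = p_X(log y) * |d(log y)/dy| = p_X(log y) / y.
p_y = norm.pdf(g_inv(grid), 1.0, 0.3) / grid

hist, edges = np.histogram(y, bins=80, range=(0.5, 8.0), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
err = np.max(np.abs(np.interp(centers, grid, p_y) - hist))
print(f"max |analytic - Monte Carlo| density error: {err:.3f}")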
Wave theory of turbulence in compressible media (acoustic theory of turbulence)
NASA Technical Reports Server (NTRS)
Kentzer, C. P.
1975-01-01
The generation and the transmission of sound in turbulent flows are treated as one of the several aspects of wave propagation in turbulence. Fluid fluctuations are decomposed into orthogonal Fourier components, with five interacting modes of wave propagation: two vorticity modes, one entropy mode, and two acoustic modes. Wave interactions, governed by the inhomogeneous and nonlinear terms of the perturbed Navier-Stokes equations, are modeled by random functions which give the rates of change of wave amplitudes equal to the averaged interaction terms. The statistical framework adopted is a quantum-like formulation in terms of complex distribution functions. The spatial probability distributions are given by the squares of the absolute values of the complex characteristic functions. This formulation results in nonlinear diffusion-type transport equations for the probability densities of the five modes of wave propagation.
Liu, Xian; Engel, Charles C
2012-12-20
Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for the potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed to correctly predict outcome probabilities. This study goes beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated the variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit scale, which are not substantively meaningful, into conditional effects on the predicted probabilities. The empirical illustration uses longitudinal data from the Asset and Health Dynamics among the Oldest Old study. Our analysis compared three sets of predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglecting to retransform random errors in the random-effects multinomial logit model yields severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
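The bias from skipping retransformation can be illustrated with a toy random-intercept multinomial logit in Python: plugging in a zero random effect differs from marginalizing the predicted probabilities over the random-effect distribution, because the logit transform is nonlinear. All parameter values below are hypothetical.

import numpy as np

rng = np.random.default_rng(4)
beta = np.array([0.0, 0.8, -0.5])   # linear predictors, 3 health states
sigma_u = 1.2                       # random-intercept SD (hypothetical)

def probs(u):
    # The random intercept u shifts states 2 and 3 vs. the reference.
    eta = beta + np.array([0.0, u, u])
    e = np.exp(eta - eta.max())
    return e / e.sum()

naive = probs(0.0)   # plug in the zero random effect (biased)
marginal = np.mean([probs(u) for u in rng.normal(0, sigma_u, 100_000)],
                   axis=0)          # retransformed (marginalized)
print("naive:   ", np.round(naive, 3))
print("marginal:", np.round(marginal, 3))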
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, J. T.; Hughes, R. O.; Escher, J. E.
This technical report documents the surrogate reaction method and the experimental results used to determine the desired neutron-induced 87Y(n,γ) cross section and the known 90Zr(n,γ) cross section. The experiment was performed with the STARLiTeR apparatus at the Texas A&M Cyclotron Institute, using the K150 Cyclotron to produce a 28.56 MeV proton beam. The proton beam impinged on Y and Zr targets to produce the nuclear reactions 89Y(p,d)88Y and 92Zr(p,d)91Zr. Both particle singles data and particle-gamma coincidence data were measured during the experiment and used to determine the γ-ray probability as a function of energy for these reactions. The resulting γ-ray probabilities as a function of energy for both nuclei are documented here. For completeness, extensive tabulated and graphical results are provided in the appendices.
NASA Astrophysics Data System (ADS)
Park, J.; Lim, Y. J.; Sung, J. H.; Kang, H. S.
2017-12-01
The Standardized Precipitation Index (SPI), a widely used meteorological drought index, fundamentally assumes stationarity, but recent changes in climate call this hypothesis into question. In this study, a new non-stationary SPI is proposed that considers not only time-varying probability distribution parameters but also the return period under a non-stationary process. The results are evaluated for two severe drought events of the last 10 years in South Korea. SPIs computed under the non-stationary hypothesis underestimated the severity of these droughts relative to the stationary SPI, even though both were recognized as significantly severe. This may occur because the variances of summer and autumn precipitation grow over time, widening the probability distribution function. This finding implies that drought expressions based on a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when assigning drought levels under a changing climate.
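For context, a minimal stationary-SPI computation in Python, fitting a gamma distribution and mapping through the standard normal quantile function (the study's non-stationary variant additionally lets the distribution parameters vary in time; zero-precipitation handling and the monthly series here are simplified assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
precip = rng.gamma(2.0, 40.0, 360)   # stand-in 30-year monthly series

shape, _, scale = stats.gamma.fit(precip, floc=0)
cdf = stats.gamma.cdf(precip, shape, loc=0, scale=scale)
spi = stats.norm.ppf(cdf)            # standardized index

print(f"fitted shape = {shape:.2f}, scale = {scale:.1f}")
print(f"driest month SPI: {spi.min():.2f}")   # <= -2 is extreme drought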
Pólya number and first return of bursty random walk: Rigorous solutions
NASA Astrophysics Data System (ADS)
Wan, J.; Xu, X. P.
2012-03-01
The recurrence properties of random walks can be characterized by the Pólya number, i.e., the probability that the walker returns to the origin at least once. In this paper, we investigate the Pólya number and first return for a bursty random walk on a line, in which the walk has different step sizes and moving probabilities. Using the concept of the Catalan number, we obtain, for the first time, exact results for the first-return probability, the average first-return time, and the Pólya number. We show that the Pólya number displays two different functional behaviors when the walk deviates from the recurrent point. By utilizing the Lagrange inversion formula, we interpret our findings by expressing the Pólya number as the closed-form solution of an inverse function. We also calculate the Pólya number using another approach, which corroborates our results and conclusions. Finally, we consider the recurrence properties and Pólya number of two variations of the bursty random walk model.
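The Pólya number of a bursty walk can also be estimated by direct Monte Carlo, as in this schematic Python sketch with an assumed step rule (+2 with probability p, -1 otherwise), which is not necessarily the paper's exact model; walks not returning within the step budget are counted as non-returning, so the estimate is a slight lower bound.

import numpy as np

rng = np.random.default_rng(6)

def polya_estimate(p, walkers=5000, max_steps=5000):
    """Fraction of walkers returning to the origin at least once."""
    pos = np.zeros(walkers, dtype=np.int64)
    active = np.ones(walkers, dtype=bool)   # not yet returned
    for _ in range(max_steps):
        steps = np.where(rng.random(walkers) < p, 2, -1)
        pos += steps * active               # frozen once returned
        active &= pos != 0
        if not active.any():
            break
    return 1.0 - active.mean()

for p in (0.20, 1/3, 0.45):   # drift 3p - 1 vanishes at p = 1/3
    print(f"p = {p:.3f}: Polya number ~ {polya_estimate(p):.3f}")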
Modelling uncertainty with generalized credal sets: application to conjunction and decision
NASA Astrophysics Data System (ADS)
Bronevich, Andrey G.; Rozenberg, Igor N.
2018-01-01
To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule to contradictory sources of information, introduce constructions like the natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We also show how the introduced model can be applied to decision problems.
Molecular analyses of the principal components of response strength.
Killeen, Peter R; Hall, Scott S; Reilly, Mark P; Kettle, Lauren C
2002-01-01
Killeen and Hall (2001) showed that a common factor called strength underlies the key dependent variables of response probability, latency, and rate, and that overall response rate is a good predictor of strength. In a search for the mechanisms that underlie those correlations, this article shows that (a) the probability of responding on a trial is a two-state Markov process; (b) latency and rate of responding can be described in terms of the probability and period of stochastic machines called clocked Bernoulli modules; and (c) one such machine, the refractory Poisson process, provides a functional relation between the probability of observing a response during any epoch and the rate of responding. This relation is one of proportionality at low rates and curvilinearity at higher rates. PMID:12216975
The non-parametric Parzen's window in stereo vision matching.
Pajares, G; de la Cruz, J
2002-01-01
This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is declared true when this probability is maximal. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF), which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. The proposed method is generalized in order to give guidelines for its use with the similarity constraint and in different environments where other features and attributes are more suitable.
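A minimal Parzen-window sketch in Python of this kind of matching score: estimate the PDF of attribute-difference vectors from known true matches and evaluate it at a candidate correspondence. The four-attribute data and the window width are synthetic stand-ins.

import numpy as np

rng = np.random.default_rng(7)
# Attribute-difference vectors of known true matches (4 attributes).
train = rng.normal(0, 0.1, (300, 4))
h = 0.05   # Parzen window width (assumed)

def parzen_pdf(x, samples, h):
    """Gaussian-kernel Parzen estimate of the PDF at point x."""
    d = samples.shape[1]
    k = np.exp(-np.sum((samples - x)**2, axis=1) / (2 * h**2))
    return k.mean() / ((2 * np.pi)**(d / 2) * h**d)

good_pair = np.array([0.02, -0.05, 0.01, 0.03])   # small differences
bad_pair = np.array([0.9, 0.4, -0.7, 0.5])        # large differences
print(f"p(good) = {parzen_pdf(good_pair, train, h):.1f}")
print(f"p(bad)  = {parzen_pdf(bad_pair, train, h):.2e}")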
Search times and probability of detection in time-limited search
NASA Astrophysics Data System (ADS)
Wilson, David; Devitt, Nicole; Maurer, Tana
2005-05-01
When modeling the search and target acquisition process, probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. The development of the relationship between detection probability and search time as a differential equation is explored. One of the parameters in the current formula for probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search, and a simple mathematical relationship between these two mean times is derived.
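Under the common exponential search model P(t) = P∞(1 − exp(−t/τ)), the relationship is easy to exhibit numerically; the following Python snippet (with assumed τ and time limit) compares the closed-form conditional mean with a Monte Carlo estimate:

import numpy as np

tau = 10.0     # mean time to detect, time-unlimited search (assumed)
t_max = 15.0   # search time limit (assumed)

# Closed form for E[T | T <= t_max] with an exponential detection time.
mean_limited = tau - t_max * np.exp(-t_max / tau) / (1 - np.exp(-t_max / tau))
print(f"unlimited mean: {tau:.2f}, time-limited mean: {mean_limited:.2f}")

# Monte Carlo confirmation.
rng = np.random.default_rng(8)
t = rng.exponential(tau, 1_000_000)
print(f"simulated time-limited mean: {t[t <= t_max].mean():.2f}")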
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-Boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory, except that the definition of the probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs in a variety of ways from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory. John von Neumann and others have commented on the lack of a relative frequency approach and a rational foundation for that probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, the context produced by the influence of other items in a choice experiment, etc. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cikota, Aleksandar; Deustua, Susana; Marleau, Francine, E-mail: acikota@eso.org
We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R_V, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R_V = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galactocentric distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R_25. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R_V using color excess probability functions and find values of R_V = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R_V = 1.70 ± 0.38 for 34 SNe Ia observed in Sbc–Scp galaxies.
Mental disorder in limb reconstruction: Prevalence, associations and impact on work disability.
Rayner, L; Simpson, A; Matcham, F; Shetty, S; Lahoti, O; Groom, G; Hotopf, M
2016-10-01
This cross-sectional survey aimed to assess the prevalence of depression, anxiety, post-traumatic stress disorder (PTSD), and drug and alcohol dependence in a limb reconstruction population and examine associations with demographic and functional variables. As part of routine clinical care, data were collected from 566 patients attending a tertiary referral centre for limb reconstruction between April 2012 and February 2016. Depression, anxiety, post-traumatic stress disorder (PTSD), and alcohol and drug dependence were measured using standardised self-report screening tools. 173 patients (30.6% CI 26.7-34.4) screened positive for at least one of the mental disorders assessed. 110 (19.4% CI 16.2-22.7) met criteria for probable major depression; 112 (19.9% CI 16.6-23.2) patients met criteria for probable generalised anxiety disorder; and 41 (7.6% CI 5.3-9.8) patients met criteria for probable PTSD. The prevalence of probable alcohol dependence and probable drug dependence was 1.6% (CI 0.6-2.7) and 4.5% (CI 2.7-6.3), respectively. Patients who screened positive for depression, anxiety and PTSD reported significantly higher levels of pain, fatigue, and functional impairment. Depression and anxiety were independently associated with work disability after adjustment for covariates (OR 1.98 (CI 1.08-3.62) and OR 1.83 (CI 1.04-3.23), respectively). The high prevalence and adverse associations of probable mental disorder in limb reconstruction attest to the need for routine psychological assessment and support. Integrated screening and management of mental disorder in this population may have a positive impact on patients' emotional, physical and occupational rehabilitation. A randomised controlled trial is needed to test this hypothesis. Copyright © 2016 Elsevier Inc. All rights reserved.
Retrospective validation of renewal-based, medium-term earthquake forecasts
NASA Astrophysics Data System (ADS)
Rotondi, R.
2013-10-01
In this paper, some methods for scoring the performance of an earthquake forecasting probability model are applied retrospectively, for different goals. The time-dependent occurrence probabilities of a renewal process are tested against earthquakes of Mw ≥ 5.3 recorded in Italy, by decades of the past century. One aim was to check the capability of the model to reproduce the data by which it was calibrated. The scoring procedures used can be distinguished by whether they require a reference model and probability thresholds. Overall, a rank-based score, information gain, gambling scores, and indices used in binary predictions and their loss functions are considered. Defining various probability thresholds as percentages of the hazard function allows the values associated with the best forecasting performance to be proposed as alarm levels in procedures for seismic risk mitigation. Some improvements are then made to the input data concerning the completeness of the historical catalogue and the consistency of the composite seismogenic sources with the hypotheses of the probability model. Another purpose of this study was thus to obtain hints on which factor is most influential and on the suitability of adopting the consequent changes to the data sets. This is achieved by repeating the estimation procedure for the occurrence probabilities and the retrospective validation of the forecasts obtained under the new assumptions. According to the rank-based score, completeness appears to be the most influential factor, while there are no clear indications of the usefulness of the decomposition of some composite sources, although in some cases it has led to improved forecasts.
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2014-05-01
We discuss the exit probability of the one-dimensional q-voter model and present tools to obtain estimates of this probability, both through simulations in large networks (around 10^7 sites) and analytically in the limit where the network is infinitely large. We argue that the result E(ρ) = ρ^q / [ρ^q + (1-ρ)^q], found in three previous works [F. Slanina, K. Sznajd-Weron, and P. Przybyła, Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006; R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007, for the case q = 2; and P. Przybyła, K. Sznajd-Weron, and M. Tabiszewski, Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117, for q > 2] using small networks (around 10^3 sites), is a good approximation, but there are noticeable deviations that appear even for small systems and that do not disappear when the system size is increased (with the notable exception of the case q = 2). We also show that, under some simple and intuitive hypotheses, the exit probability must obey the inequality ρ^q / [ρ^q + (1-ρ)] ≤ E(ρ) ≤ ρ / [ρ + (1-ρ)^q] in the infinite size limit. We believe this settles in the negative the suggestion [S. Galam and A. C. R. Martins, Europhys. Lett. 95, 48005 (2011), 10.1209/0295-5075/95/48005] that this result would be a finite-size effect, with the exit probability actually being a step function. We also show how the result that the exit probability cannot be a step function can be reconciled with the Galam unified frame, which was also a source of controversy.
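The ansatz and the bounds above are straightforward to check numerically; the following short Python snippet does so for q = 3 (any q ≥ 1 works):

import numpy as np

q = 3
rho = np.linspace(0.01, 0.99, 99)
ansatz = rho**q / (rho**q + (1 - rho)**q)
lower = rho**q / (rho**q + (1 - rho))
upper = rho / (rho + (1 - rho)**q)

assert np.all(lower <= ansatz) and np.all(ansatz <= upper)
print("ansatz lies within the bounds; E(1/2) =", ansatz[49])  # = 0.5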
Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.
2003-01-01
These maps present preliminary assessments of the probability of debris-flow activity and estimates of the peak discharges that could be generated by debris flows issuing from basins burned by the Piru, Simi, and Verdale Fires of October 2003 in southern California, in response to 25-year, 10-year, and 2-year recurrence, 1-hour duration rainstorms. The probability maps are based on a logistic multiple-regression model that describes the percent chance of debris-flow production from an individual basin as a function of burned extent, soil properties, basin gradient, and storm rainfall. The peak discharge maps are based on a multiple-regression model that estimates debris-flow peak discharge at a basin outlet as a function of basin gradient, burned extent, and storm rainfall. Probabilities of debris-flow occurrence for the Piru Fire range between 2 and 94%, and estimates of debris-flow peak discharge range between 1,200 and 6,640 ft³/s (34 to 188 m³/s). Basins burned by the Simi Fire show probabilities of debris-flow occurrence between 1 and 98%, and peak discharge estimates between 1,130 and 6,180 ft³/s (32 and 175 m³/s). The probabilities of debris-flow activity calculated for the Verdale Fire range from negligible values to 13%. Peak discharges were not estimated for this fire because of these low probabilities. These maps are intended to identify the basins most prone to the largest debris-flow events and to provide information for the preliminary design of mitigation measures and for planning evacuation timing and routes.
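The logistic form underlying such probability maps can be sketched as follows; the coefficients and predictor scalings below are hypothetical placeholders, not the published model's values:

import math

def debris_flow_probability(burned_frac, gradient_pct, storm_rain_mm,
                            b=(-6.0, 4.5, 0.06, 0.08)):  # assumed values
    """Logistic model: P = 1 / (1 + exp(-eta))."""
    eta = (b[0] + b[1] * burned_frac + b[2] * gradient_pct
           + b[3] * storm_rain_mm)
    return 1.0 / (1.0 + math.exp(-eta))

# A heavily burned, steep basin in a large storm vs. a mild case.
print(f"{debris_flow_probability(0.9, 35, 25):.0%}")   # high probability
print(f"{debris_flow_probability(0.2, 12, 8):.0%}")    # low probability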
Mestres-Missé, Anna; Trampel, Robert; Turner, Robert; Kotz, Sonja A
2016-04-01
A key aspect of optimal behavior is the ability to predict what will come next. To achieve this, we must have a fairly good idea of the probability of occurrence of possible outcomes. This is based both on prior knowledge about a particular or similar situation and on immediately relevant new information. One question that arises is: when considering converging prior probability and external evidence, is the most probable outcome selected or does the brain represent degrees of uncertainty, even highly improbable ones? Using functional magnetic resonance imaging, the current study explored these possibilities by contrasting words that differ in their probability of occurrence, namely, unbalanced ambiguous words and unambiguous words. Unbalanced ambiguous words have a strong frequency-based bias towards one meaning, while unambiguous words have only one meaning. The current results reveal larger activation in lateral prefrontal and insular cortices in response to dominant ambiguous compared to unambiguous words even when prior and contextual information biases one interpretation only. These results suggest a probability distribution, whereby all outcomes and their associated probabilities of occurrence--even if very low--are represented and maintained.
Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs
2017-01-01
Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package. PMID:29107980
ERIC Educational Resources Information Center
Dracobly, Joseph D.; Smith, Richard G.
2012-01-01
This multiple-study experiment evaluated the utility of assessing and treating severe self-injurious behavior (SIB) based on the outcomes of a functional analysis of precursor behavior. In Study 1, a precursor to SIB was identified using descriptive assessment and conditional probability analyses. In Study 2, a functional analysis of precursor…
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is a 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit/miss and signal-amplitude testing, where signal amplitudes are reduced to hit/miss data by applying a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
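The binomial arithmetic behind the 90/95 criterion can be verified with an exact (Clopper-Pearson) one-sided lower confidence bound; the classic demonstration points 29/29 and 45/46 both clear 0.90 at 95% confidence:

from scipy.stats import beta

def pod_lower_bound(hits, trials, confidence=0.95):
    """Exact one-sided lower confidence bound on the POD."""
    if hits == trials:
        return (1 - confidence) ** (1 / trials)  # closed form, all hits
    return beta.ppf(1 - confidence, hits, trials - hits + 1)

print(f"29/29 hits -> POD >= {pod_lower_bound(29, 29):.3f} at 95% conf.")
print(f"45/46 hits -> POD >= {pod_lower_bound(45, 46):.3f} at 95% conf.")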
Flood Frequency Analyses Using a Modified Stochastic Storm Transposition Method
NASA Astrophysics Data System (ADS)
Fang, N. Z.; Kiani, M.
2015-12-01
Research shows that areas with similar topography and climate have comparable precipitation occurrences. Reproduction and realization of historical rainfall events provide foundations for frequency analysis and the advancement of meteorological studies. Stochastic Storm Transposition (SST) is a method for this purpose: it enables hydrologic frequency analyses by transposing observed historical storm events to sites of interest. However, many previous SST studies suffer from simplified probability density functions (PDFs) that do not consider restrictions on transposing rainfall. The goal of this study is to stochastically examine the impacts of extreme events on all locations in a homogeneity zone. Since storms with the same probability of occurrence over a homogeneous area do not have identical hydrologic impacts, the authors utilize detailed precipitation parameters, including the probability of occurrence of a certain depth and the number of occurrences of extreme events, which are incorporated into a joint probability function. The new approach can reduce the bias from uniformly transposing storms, which erroneously increases the probability of occurrence of storms in areas with higher rainfall depths. This procedure is iterated to simulate storm events for one thousand years as the basis for updating frequency-analysis curves such as intensity-duration-frequency (IDF) and flood frequency analysis (FFA) curves. The study area is the Upper Trinity River watershed, including the Dallas-Fort Worth metroplex, with a total area of 6,500 mi². It is the first time that the SST method has been examined at such a wide scale with 20 years of radar rainfall data.
NASA Astrophysics Data System (ADS)
Kim, Seokpum; Wei, Yaochi; Horie, Yasuyuki; Zhou, Min
2018-05-01
The design of new materials requires the establishment of macroscopic measures of material performance as functions of microstructure. Traditionally, this process has been an empirical endeavor. An approach to computationally predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs) using mesoscale simulations is developed. The simulations explicitly account for microstructure, constituent properties, and interfacial responses, and capture the processes responsible for the development of hotspots and damage. The specific mechanisms tracked include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to directly mimic relevant experiments and quantify statistical variations of material behavior due to inherent material heterogeneities. The predicted thresholds and ignition probabilities are expressed in James-type and Walker-Wasley-type relations, leading to explicit analytical expressions for the ignition probability as a function of loading. Specifically, ignition thresholds corresponding to any given level of ignition probability, as well as ignition probability maps, are predicted for PBX 9404 over the loading regime of Up = 200-1200 m/s, where Up is the particle speed. The predicted results are in good agreement with available experimental measurements. A parametric study also shows that binder properties can significantly affect the macroscopic ignition behavior of PBXs. The capability to computationally predict macroscopic engineering material response relations from material microstructures and basic constituent and interfacial properties lends itself to the design of new materials as well as the analysis of existing materials.
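As a schematic of condensing ignition statistics into an analytic probability-versus-loading relation, the following Python sketch fits a logistic curve to hypothetical go/no-go ignition fractions (the paper's actual expressions are James-type and Walker-Wasley-type relations fitted to mesoscale simulation results):

import numpy as np
from scipy.optimize import curve_fit

def p_ign(up, u50, k):
    """Logistic ignition probability vs. particle speed Up [m/s]."""
    return 1.0 / (1.0 + np.exp(-k * (up - u50)))

# Hypothetical ignition fractions from sets of simulated microstructures.
up = np.array([200.0, 400.0, 600.0, 800.0, 1000.0, 1200.0])
frac = np.array([0.0, 0.05, 0.35, 0.80, 0.95, 1.0])

(u50, k), _ = curve_fit(p_ign, up, frac, p0=(600.0, 0.01))
print(f"50% ignition threshold: Up ~ {u50:.0f} m/s (steepness {k:.4f})")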