NASA Astrophysics Data System (ADS)
Koshinchanov, Georgy; Dimitrov, Dobri
2008-11-01
The characteristics of rainfall intensity are important for many purposes, including the design of sewage and drainage systems, the tuning of flood warning procedures, etc. These are usually statistical estimates of the precipitation intensity realized over a certain period of time (e.g. 5, 10 min, etc.) with a given return period (e.g. 20, 100 years, etc.). The traditional approach to evaluating such precipitation intensities is to process the pluviometer records and fit a probability distribution to samples of intensities valid for certain locations or regions. These estimates then become part of the state regulations used for various economic activities. Two problems occur with this approach: 1. Due to various factors, climate conditions change and the precipitation intensity estimates need regular updating; 2. Since the extremes of the probability distribution are of particular importance in practice, the distribution-fitting methodology needs to pay specific attention to those parts of the distribution. The aim of this paper is to review the existing methodologies for processing intense rainfalls and to refresh some of the statistical estimates for the studied areas. The methodologies used in Bulgaria to analyze intense rainfalls and produce the relevant statistical estimates are: the method of maximum intensity, used in the National Institute of Meteorology and Hydrology to process and decode the pluviometer records, followed by distribution fitting for each precipitation duration; the same method, but with separate modeling of the probability distribution for the middle and high probability quantiles; a method similar to the first one, but with an intensity threshold of 0.36 mm/min; a method proposed by the Russian hydrologist G. A. Aleksiev for regionalization of estimates over a territory, improved and adapted for Bulgaria by S. Gerasimov; and a method considering only the intense rainfalls (if any) during the day with the maximal annual daily precipitation total for a given year. Conclusions are drawn on the relevance and adequacy of the applied methods.
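As an illustration of the distribution-fitting step described above, the following minimal sketch fits a Gumbel distribution to hypothetical annual-maximum 10-min intensities and reads off return-period quantiles; the abstract does not prescribe a specific distribution, so the Gumbel choice and all numbers are assumptions.

# Sketch: fit a Gumbel (EV1) distribution to annual-maximum rainfall
# intensities for one duration and report return-period quantiles.
import numpy as np
from scipy import stats

# Hypothetical annual maxima of 10-min rainfall intensity (mm/min)
annual_max_10min = np.array([0.9, 1.2, 0.8, 1.5, 1.1, 0.7, 1.3, 1.0, 1.6, 0.95])

loc, scale = stats.gumbel_r.fit(annual_max_10min)    # maximum-likelihood fit

for T in (20, 100):                                   # return periods in years
    p = 1.0 - 1.0 / T                                 # non-exceedance probability
    q = stats.gumbel_r.ppf(p, loc=loc, scale=scale)
    print(f"{T}-year 10-min intensity: {q:.2f} mm/min")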
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration. © 2013 Elsevier B.V. All rights reserved.
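A rough illustration of the counting idea, under simplifying assumptions: the column intensities below are synthetic, and a generic Gaussian mixture with BIC-based model selection stands in for the model-based estimation and component-selection criterion discussed in the abstract.

# Sketch: infer the number of atoms per column from scattered intensities
# by fitting Gaussian mixtures and picking the component count that
# minimizes an information criterion (BIC used here as a stand-in).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical column intensities from columns of 3, 4 and 5 atoms
intensities = np.concatenate([rng.normal(mu, 0.05, 40) for mu in (0.3, 0.4, 0.5)])
X = intensities.reshape(-1, 1)

fits = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 8)]
best = min(fits, key=lambda g: g.bic(X))
# components sorted by mean intensity correspond to increasing atom counts
order = np.argsort(best.means_.ravel())
print("selected number of components:", best.n_components)
print("component means (sorted):", best.means_.ravel()[order])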
Zhang, Yun; Kasai, Katsuyuki; Watanabe, Masayoshi
2003-01-13
We give the intensity fluctuation joint probability of the twin-beam quantum state, which was generated with an optical parametric oscillator operating above threshold. We then present what is, to our knowledge, the first measurement of the intensity fluctuation conditional probability distributions of twin beams. The measured inference variance of the twin beams, 0.62 ± 0.02, which is less than the standard quantum limit of unity, indicates inference with a precision better than that of separable states. The measured photocurrent variance exhibits a quantum correlation of as much as −4.9 ± 0.2 dB between the signal and the idler.
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments: one obtains posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
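A minimal sketch of the maximum entropy method of moments itself (not the Bayesian extension described above): the density on [0, 1] matching two assumed moments is found by minimizing the convex dual over the Lagrange multipliers. The grid, the target moments, and the choice of power-function constraints are all assumptions of the sketch.

# Sketch: maximum-entropy density on [0, 1] constrained by its first two
# moments, obtained by minimizing the dual function over the Lagrange
# multipliers (all numbers are illustrative).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
moments = np.array([0.4, 0.2])          # target E[x], E[x^2] (hypothetical values)
f = np.vstack([x, x**2])                # moment functions f_k(x)

def dual(lam):
    # convex dual: log Z(lambda) + lambda . mu ; its minimizer gives the MaxEnt density
    logZ = np.log(np.sum(np.exp(-lam @ f)) * dx)
    return logZ + lam @ moments

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
p = np.exp(-lam @ f)
p /= np.sum(p) * dx                     # normalized maximum-entropy density estimate
print("multipliers:", lam, " check E[x]:", np.sum(x * p) * dx)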
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
Modeling the probability distribution of peak discharge for infiltrating hillslopes
NASA Astrophysics Data System (ADS)
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC on increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a given set of parameters, the derived probability distribution of peak discharge is well fitted by the gamma distribution. Finally, an application to a small watershed, aimed at testing the possibility of preparing in advance the runoff coefficient tables used in the rational method, was carried out, together with a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil.
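Of the three coupled components, the Green-Ampt infiltration step is the easiest to illustrate; the following sketch solves its implicit equation by fixed-point iteration with hypothetical soil parameters (the kinematic wave and IDF components of the GABS model are omitted).

# Sketch: Green-Ampt cumulative infiltration F(t) by fixed-point iteration
# (parameter values below are hypothetical, roughly loam-like).
import numpy as np

K = 1.0          # saturated hydraulic conductivity (cm/h)
psi = 11.0       # wetting-front suction head (cm)
dtheta = 0.3     # soil moisture deficit (antecedent condition)

def cumulative_infiltration(t_hours, tol=1e-8):
    """Solve F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t for F."""
    F = K * t_hours                                   # starting guess
    for _ in range(200):
        F_new = K * t_hours + psi * dtheta * np.log(1.0 + F / (psi * dtheta))
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

for t in (0.5, 1.0, 2.0):
    F = cumulative_infiltration(t)
    f_rate = K * (1.0 + psi * dtheta / F)             # infiltration capacity (cm/h)
    print(f"t={t} h: F={F:.2f} cm, f={f_rate:.2f} cm/h")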
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
Superthermal photon bunching in terms of simple probability distributions
NASA Astrophysics Data System (ADS)
Lettau, T.; Leymann, H. A. M.; Melcher, B.; Wiersig, J.
2018-05-01
We analyze the second-order photon autocorrelation function g(2) with respect to the photon probability distribution and discuss the generic features of a distribution that results in superthermal photon bunching [g(2)(0) > 2]. Superthermal photon bunching has been reported for a number of optical microcavity systems that exhibit processes such as superradiance or mode competition. We show that a superthermal photon number distribution cannot be constructed from the principle of maximum entropy if only the intensity and the second-order autocorrelation are given. However, for bimodal systems, an unbiased superthermal distribution can be constructed from second-order correlations and the intensities alone. Our findings suggest modeling superthermal single-mode distributions by a mixture of a thermal and a lasinglike state and thus reveal a generic mechanism in the photon probability distribution responsible for creating superthermal photon bunching. We relate our general considerations to a physical system, i.e., a (single-emitter) bimodal laser, and show that its statistics can be approximated and understood within our proposed model. Furthermore, the excellent agreement of the statistics of the bimodal laser and our model reveals that the bimodal laser is an ideal source of bunched photons, in the sense that it can generate statistics that contain no other features but the superthermal bunching.
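The mixture picture can be checked with a few lines of arithmetic: for a photon-number distribution that is a weighted mixture of a thermal and a Poissonian (lasing-like) state, the factorial moments give g2(0) directly. The weights and mean photon numbers below are illustrative only.

# Sketch: second-order autocorrelation g2(0) for a photon-number
# distribution modeled as a mixture of a thermal and a Poissonian
# (lasing-like) state.
def g2_mixture(w_thermal, n_thermal, n_laser):
    # factorial moments: thermal <n(n-1)> = 2*nbar^2, Poissonian <n(n-1)> = nbar^2
    mean_n = w_thermal * n_thermal + (1 - w_thermal) * n_laser
    n_n_minus_1 = w_thermal * 2 * n_thermal**2 + (1 - w_thermal) * n_laser**2
    return n_n_minus_1 / mean_n**2

# A mode that is mostly "off" (weak thermal emission) but occasionally
# "on" (bright lasing-like emission) gives g2(0) well above 2.
print(g2_mixture(w_thermal=0.9, n_thermal=0.5, n_laser=20.0))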
Influence of pitting defects on quality of high power laser light field
NASA Astrophysics Data System (ADS)
Ren, Huan; Zhang, Lin; Yang, Yi; Shi, Zhendong; Ma, Hua; Jiang, Hongzhen; Chen, Bo; Yang, XiaoYu; Zheng, Wanguo; Zhu, Rihong
2018-01-01
Using the split-step Fourier-transform method to solve the nonlinear paraxial wave equation, the intensity distribution of the light field as the pit diameter or depth changes is obtained by numerical simulation, including the intensity distribution inside the optical element, the beam near field, the field at different distances behind the element, and the beam far field. The results show that as the pit diameter or depth increases, the peak intensity of the light field and the contrast inside the element increase correspondingly. The contrast of the intensity distribution at the rear surface of the element increases slightly. The peak intensity produced by the thermal effect at a specific location downstream of the element continues to increase, so the damage probability of optics placed there is greatly increased. For the far-field intensity distribution, increasing the pit diameter or depth changes the focal spot intensity distribution, and the energy in the central region of the spectrum increases steadily. This work provides a basis for the quantitative design and inspection of pitting defects and a reference for the design of the optical path arrangement.
q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations
NASA Astrophysics Data System (ADS)
Katz, Yuri A.; Tian, Li
2013-10-01
We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1
NASA Astrophysics Data System (ADS)
Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan
2017-10-01
Cumulative pulse detection with an appropriate number of accumulated pulses and an appropriate threshold can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse accumulation process, the cumulative detection probabilities and the factors that influence them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting the range accuracy and precision are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. The range accuracy and precision are further important parameters for evaluating detection performance; the echo intensity and pulse width are the main factors influencing them, and higher range accuracy and precision are obtained with stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width, when the echo intensity is larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
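A minimal sketch of the cumulative-detection idea under Poisson statistics: a bin fires on a single shot with probability 1 − exp(−(n_s + n_b)), and a detection is declared when at least a threshold number of the accumulated shots fire. Dead-time and earlier-bin blocking effects, which the full model would include, are ignored, and all numbers are assumptions.

# Sketch: cumulative-pulse detection and false-alarm probabilities for a
# GM-APD receiver, assuming Poisson primary-electron statistics.
import numpy as np
from scipy.stats import binom

def p_fire(n_signal, n_noise):
    # probability that a given time bin fires on a single pulse
    return 1.0 - np.exp(-(n_signal + n_noise))

def cumulative_prob(n_signal, n_noise, n_pulses, threshold):
    # probability that the bin fires on at least `threshold` of n_pulses shots
    p = p_fire(n_signal, n_noise)
    return binom.sf(threshold - 1, n_pulses, p)

n_pulses, threshold = 20, 5
n_noise = 0.05                                   # mean noise electrons per bin
print("detection  :", cumulative_prob(2.0, n_noise, n_pulses, threshold))
print("false alarm:", cumulative_prob(0.0, n_noise, n_pulses, threshold))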
Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions
ERIC Educational Resources Information Center
Vuolo, Mike
2017-01-01
Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P; Gee, James C
2009-01-01
We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
NASA Astrophysics Data System (ADS)
Phillips, R. C.; Samadi, S. Z.; Meadows, M. E.
2018-07-01
This paper examines the frequency, distribution tails, and peak-over-threshold (POT) behavior of extreme floods through an analysis that centers on the October 2015 flooding in North Carolina (NC) and South Carolina (SC), United States (US). The most striking features of the October 2015 flooding were a short time to peak (Tp) and a multi-hour continuous flood peak, which caused intensive and widespread damage to human lives, property, and infrastructure. The 2015 flooding was produced by a sequence of intense rainfall events which originated from category 4 hurricane Joaquin over a period of four days. Here, the probability distribution and distribution parameters (i.e., location, scale, and shape) of floods were investigated by comparing the upper part of the empirical distributions of the annual maximum flood (AMF) and POT series with light- to heavy-tailed theoretical distributions: Fréchet, Pareto, Gumbel, Weibull, Beta, and Exponential. Specifically, four sets of U.S. Geological Survey (USGS) gauging data from the central Carolinas with record lengths of approximately 65-125 years were used. The analysis suggests that heavier-tailed distributions are in better agreement with the POT data, and to some extent the AMF data, than the more commonly used exponential (light-tailed) probability distributions. Further, the threshold selection and record length affect the heaviness of the tail and the fluctuations of the parent distributions. The shape parameter and its evolution over the period of record play a critical and poorly understood role in determining the scaling of flood response to intense rainfall.
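The tail comparison can be sketched with a generalized Pareto fit to peak-over-threshold exceedances; the discharge series below is synthetic (not the USGS records used in the paper), and a positive fitted shape parameter would indicate a heavy tail.

# Sketch: peaks-over-threshold analysis with a generalized Pareto fit;
# the discharge series and threshold below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
daily_peaks = rng.gamma(shape=2.0, scale=150.0, size=5000)    # synthetic discharges

threshold = np.quantile(daily_peaks, 0.98)
excesses = daily_peaks[daily_peaks > threshold] - threshold

# Fix the location at zero so only the shape (xi) and scale are estimated
xi, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
print(f"threshold={threshold:.1f}, shape xi={xi:.3f}, scale={scale:.1f}")
# xi > 0 would indicate a heavy (Frechet/Pareto-type) upper tail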
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss
NASA Astrophysics Data System (ADS)
Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.
2012-12-01
3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial and temporal distribution of the waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss and lower- and upper-band chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT, and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty of the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT, and L*. These spatio-temporal correlations cannot be inferred from the wave databases. In this study we use a temporal correlation of ~1 hour for the sampled wave intensities, informed by the observed autocorrelation in the AE index, a spatial correlation length of ~100 km in the two directions perpendicular to the magnetic field, and a spatial correlation length of 5000 km in the direction parallel to the magnetic field, following the work of Santolik et al. (2003), who used multi-spacecraft measurements from Cluster to quantify the correlation length scales for equatorial chorus. We find that, despite the small correlation length scale for chorus, there remains significant variability in the model outcomes driven by variability in the chorus wave intensities.
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
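A simplified sketch of the two-component separation described above: synthetic pixel intensities are split into background and foreground Gaussians. Note that EM (via scikit-learn) is substituted here for Pearson's method of moments used in the paper, purely for brevity, and all intensity values are invented.

# Sketch: separate a pixel's temporal intensity histogram into background
# and foreground Gaussian components (EM used as a stand-in for Pearson's
# method of moments; the data are synthetic).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
background = rng.normal(90, 5, 800)        # stable background intensity
foreground = rng.normal(140, 20, 200)      # transient foreground (people, objects)
samples = np.concatenate([background, foreground]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
means = gmm.means_.ravel()
bg_index = int(np.argmin(gmm.covariances_.ravel()))   # narrower component taken as background here
print("estimated background mean intensity:", means[bg_index])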
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based on the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class probabilities. The mixture model makes it possible to consider heterogeneous thematic classes that cannot be adequately fitted by a unimodal Wishart distribution. To make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2005-11-01
We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
Novotny, A.J.
1960-01-01
The one factor which probably contributes the greatest effect on distributional patterns of Anisakis within chum salmon musculature is the total intensity of infection (or population density of Anisakis) in each fish.
Modeling the Dependency Structure of Integrated Intensity Processes
Ma, Yong-Ki
2015-01-01
This paper studies an important issue of dependence structure. To model this structure, the intensities within the Cox processes are driven by dependent shot noise processes, where jumps occur simultaneously and their sizes are correlated. The joint survival probability of the integrated intensities is explicitly obtained from the copula with exponential marginal distributions. Subsequently, this result can provide a very useful guide for credit risk management. PMID:26270638
NASA Astrophysics Data System (ADS)
Jing, R.; Lin, N.; Emanuel, K.; Vecchi, G. A.; Knutson, T. R.
2017-12-01
A Markov environment-dependent hurricane intensity model (MeHiM) is developed to simulate the climatology of hurricane intensity given the surrounding large-scale environment. The model considers three unobserved discrete states representing respectively storm's slow, moderate, and rapid intensification (and deintensification). Each state is associated with a probability distribution of intensity change. The storm's movement from one state to another, regarded as a Markov chain, is described by a transition probability matrix. The initial state is estimated with a Bayesian approach. All three model components (initial intensity, state transition, and intensity change) are dependent on environmental variables including potential intensity, vertical wind shear, midlevel relative humidity, and ocean mixing characteristics. This dependent Markov model of hurricane intensity shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models) in estimating the distributions of 6-h and 24-h intensity change, lifetime maximum intensity, and landfall intensity, etc. Here we compare MeHiM with various dynamical models, including a global climate model [High-Resolution Forecast-Oriented Low Ocean Resolution model (HiFLOR)], a regional hurricane model (Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model), and a simplified hurricane dynamic model [Coupled Hurricane Intensity Prediction System (CHIPS)] and its newly developed fast simulator. The MeHiM developed based on the reanalysis data is applied to estimate the intensity of simulated storms to compare with the dynamical-model predictions under the current climate. The dependences of hurricanes on the environment under current and future projected climates in the various models will also be compared statistically.
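A stripped-down sketch of the Markov-chain core of such a model: three hidden states with a transition matrix and state-specific intensity-change distributions. The transition matrix, the Gaussian intensity-change parameters, and the omission of any environmental dependence are simplifying assumptions, not the fitted MeHiM values.

# Sketch: simulate storm intensity with a three-state Markov chain
# (slow / moderate / rapid intensification); all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.7, 0.25, 0.05],       # row: current state, col: next state
              [0.3, 0.55, 0.15],
              [0.2, 0.40, 0.40]])
dv_params = [(0.0, 2.0), (5.0, 3.0), (15.0, 5.0)]   # (mean, sd) of 6-h dV (kt) per state

state, intensity = 0, 35.0             # start as a weak, slowly evolving storm
track = [intensity]
for _ in range(20):                    # twenty 6-h steps
    state = rng.choice(3, p=P[state])
    mu, sd = dv_params[state]
    intensity = max(0.0, intensity + rng.normal(mu, sd))
    track.append(intensity)
print("lifetime maximum intensity (kt):", max(track))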
Scaling and clustering effects of extreme precipitation distributions
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Li, Jianfeng
2012-08-01
One of the impacts of climate change and human activities on the hydrological cycle is the change in the precipitation structure. Closely related to the precipitation structure are two characteristics: the volume (m) of wet periods (WPs) and the time interval between WPs or waiting time (t). Using daily precipitation data for a period of 1960-2005 from 590 rain gauge stations in China, these two characteristics are analyzed, involving scaling and clustering of precipitation episodes. Our findings indicate that m and t follow similar probability distribution curves, implying that precipitation processes are controlled by similar underlying thermodynamics. Analysis of conditional probability distributions shows a significant dependence of m and t on their previous values of similar volumes, and the dependence tends to be stronger when m is larger or t is longer. This indicates that a higher probability can be expected when high-intensity precipitation is followed by precipitation episodes of similar intensity and a long waiting time between WPs is followed by a waiting time of similar duration. This result indicates the clustering of extreme precipitation episodes: severe droughts or floods are apt to occur in groups.
Statistics of Optical Coherence Tomography Data From Human Retina
de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo
2010-01-01
Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, the fit parameters of this model are relatively constant along retinal layers but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
Gidoin, Cynthia; Avelino, Jacques; Deheuvels, Olivier; Cilas, Christian; Bieng, Marie Ange Ngo
2014-03-01
Vegetation composition and plant spatial structure affect disease intensity through resource and microclimatic variation effects. The aim of this study was to evaluate the independent effect and relative importance of host composition and plant spatial structure variables in explaining disease intensity at the plot scale. For that purpose, the intensity of frosty pod rot, a disease caused by Moniliophthora roreri on cacao pods, was monitored in 36 cacao agroforests in Costa Rica in order to assess the vegetation composition and spatial structure variables conducive to the disease. Hierarchical partitioning was used to identify the most causal factors. Firstly, pod production, cacao tree density and shade tree spatial structure had significant independent effects on disease intensity. In our case study, the amount of susceptible tissue was the most relevant host composition variable for explaining disease intensity by resource dilution. Indeed, cacao tree density probably affected disease intensity more through the creation of self-shading than through host dilution. Lastly, only regularly distributed forest trees, and not aggregated or randomly distributed forest trees, reduced disease intensity in comparison to plots with a low forest tree density. A regular spatial structure is probably crucial to the creation of moderate and uniform shade as recommended for frosty pod rot management. As pod production is an important service expected from these agroforests, shade tree spatial structure may be a lever for integrated management of frosty pod rot in cacao agroforests.
Mass and angular distributions of the reaction products in heavy ion collisions
NASA Astrophysics Data System (ADS)
Nasirov, A. K.; Giardina, G.; Mandaglio, G.; Kayumov, B. M.; Tashkhodjaev, R. B.
2018-05-01
The optimal reactions and beam energies for synthesizing superheavy elements are sought by studying the mass and angular distributions of fission-like products in heavy-ion collisions, since the evaporation residue cross section constitutes a negligibly small part of the fusion cross section. The intensity of the yield of fission-like products allows us to estimate the probability of complete fusion of the interacting nuclei. The overlap of the mass and angular distributions of the fusion-fission and quasifission products makes it difficult to estimate the correct value of the probability of compound nucleus formation. A study of the mass and angular distributions of the reaction products is thus a suitable key to understanding the interaction mechanism in heavy-ion collisions.
An updated climatology of explosive cyclones using alternative measures of cyclone intensity
NASA Astrophysics Data System (ADS)
Hanley, J.; Caballero, R.
2009-04-01
Using a novel cyclone tracking and identification method, we compute a climatology of explosively intensifying cyclones or 'bombs' using the ERA-40 and ERA-Interim datasets. Traditionally, 'bombs' have been identified using a central pressure deepening rate criterion (Sanders and Gyakum, 1980). We investigate alternative methods of capturing such extreme cyclones. These methods include using the maximum wind contained within the cyclone, and using a potential vorticity column measure within such systems, as a measure of intensity. Using the different measures of cyclone intensity, we construct and intercompare maps of peak cyclone intensity. We also compute peak intensity probability distributions, and assess the evidence for the bi-modal distribution found by Roebber (1984). Finally, we address the question of the relationship between storm intensification rate and storm destructiveness: are 'bombs' the most destructive storms?
Banerjee, Abhirup; Maji, Pradipta
2015-12-01
The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis technique, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastical rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates of following applications, like hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one certain non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities, results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic. Employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower equal and greater equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.
Das, Biman; Drake, Eli; Jack, John
2004-02-01
Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where, under heavy gain saturation, the total output approaches a constant intensity, although the intensity of any mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of the trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
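The Monte Carlo baseline referred to above can be sketched in a few lines for a toy fault tree with lognormal basic events; the closed-form approximation developed in the article is not reproduced here, and the naive lognormal fit at the end is only a stand-in with assumed parameter values.

# Sketch: Monte Carlo propagation of lognormal basic-event uncertainties
# through a small fault tree (TOP = A AND (B OR C)), plus a naive
# lognormal fit to the resulting top-event distribution.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

def sample_lognormal(median, error_factor, size):
    # error factor defined as the ratio of the 95th percentile to the median
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(mean=np.log(median), sigma=sigma, size=size)

A = sample_lognormal(1e-3, 3, n)
B = sample_lognormal(5e-4, 5, n)
C = sample_lognormal(2e-4, 10, n)
top = A * (B + C - B * C)                          # AND of A with OR of B, C

mu, sigma = np.log(top).mean(), np.log(top).std()
print("MC 95th percentile        :", np.quantile(top, 0.95))
print("lognormal-fit 95th percent:", np.exp(mu + 1.645 * sigma))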
Relating Convective and Stratiform Rain to Latent Heating
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Lang, Stephen; Zeng, Xiping; Shige, Shoichi; Takayabu, Yukari
2010-01-01
The relationship among surface rainfall, its intensity, and its associated stratiform amount is established by examining observed precipitation data from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR). The results show that for moderate-high stratiform fractions, rain probabilities are strongly skewed toward light rain intensities. For convective-type rain, the peak probability of occurrence shifts to higher intensities but is still significantly skewed toward weaker rain rates. The main differences between the distributions for oceanic and continental rain are for heavily convective rain. The peak occurrence, as well as the tail of the distribution containing the extreme events, is shifted to higher intensities for continental rain. For rainy areas sampled at 0.5° horizontal resolution, the occurrence of conditional rain rates over 100 mm/day is significantly higher over land. Distributions of rain intensity versus stratiform fraction for simulated precipitation data obtained from cloud-resolving model (CRM) simulations are quite similar to those from the satellite, providing a basis for mapping simulated cloud quantities to the satellite observations. An improved convective-stratiform heating (CSH) algorithm is developed based on two sources of information: gridded rainfall quantities (i.e., the conditional intensity and the stratiform fraction) observed from the TRMM PR and synthetic cloud process data (i.e., latent heating, eddy heat flux convergence, and radiative heating/cooling) obtained from CRM simulations of convective cloud systems. The new CSH algorithm-derived heating has a noticeably different heating structure over both ocean and land regions compared to the previous CSH algorithm. Major differences between the new and old algorithms include a significant increase in the amount of low- and midlevel heating, a downward emphasis in the level of maximum cloud heating by about 1 km, and a larger variance between land and ocean in the new CSH algorithm.
Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.
2018-05-01
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
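For the uniform-on-a-square baseline case, the Poisson property of the isolated-node count is easy to probe numerically; the node count, linking range, and neglect of boundary effects in the analytic mean below are assumptions of the sketch.

# Sketch: isolated-node count in a random geometric graph on the unit
# square with uniform node placement, compared against the Poisson mean.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
n, r, trials = 500, 0.05, 200
counts = []
for _ in range(trials):
    pts = rng.random((n, 2))
    tree = cKDTree(pts)
    # number of neighbors within r, excluding the point itself
    degrees = np.array([len(tree.query_ball_point(p, r)) - 1 for p in pts])
    counts.append(int(np.sum(degrees == 0)))

lam = n * np.exp(-n * np.pi * r**2)      # expected isolated nodes, boundary effects ignored
print("mean isolated nodes (simulation):", np.mean(counts), " Poisson mean:", lam)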
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
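A toy version of a binomial peak-match score of the kind the abstract builds on; the actual ProVerB scoring function, including its use of peak intensities, is not reproduced, and the match counts and random-match probability below are assumed values.

# Sketch: binomial peak-match significance for one candidate peptide.
import math
from scipy.stats import binom

n_theoretical = 24       # fragment ions predicted for a candidate peptide
n_matched = 9            # peaks matched within the m/z tolerance
p_random = 0.04          # chance of a random match per fragment (assumed)

# survival function: probability of >= n_matched matches arising by chance
p_value = binom.sf(n_matched - 1, n_theoretical, p_random)
score = -10.0 * math.log10(p_value)
print(f"p-value = {p_value:.2e}, score = {score:.1f}")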
NASA Astrophysics Data System (ADS)
Gómez, Wilmar
2017-04-01
By analyzing the spatial and temporal variability of extreme precipitation events we can prevent or reduce the associated threat and risk. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which are generally not independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of its marginals using copulas. This document presents a general framework for bivariate and multivariate frequency analysis using Archimedean copulas for extreme hydroclimatological events such as severe storms. The analysis was conducted for precipitation events in the lower Tunjuelo River basin in Colombia. The results show that, for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas, thereby providing more accurate and reliable information on design storms and the associated risks. It also shows how the use of copulas greatly simplifies the study of multivariate distributions and introduces the concept of the joint return period, used to properly represent the needs of hydrological design in frequency analysis.
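A minimal sketch of the joint return-period calculation with an Archimedean copula: given the marginal non-exceedance probabilities of intensity and duration and a Gumbel-Hougaard copula, the "AND" return period follows directly. The dependence parameter and probabilities below are illustrative, not the fitted Tunjuelo values.

# Sketch: joint "AND" return period of storm intensity and duration
# using a Gumbel-Hougaard (Archimedean) copula.
import numpy as np

def gumbel_copula(u, v, theta):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

theta = 2.5                        # copula dependence parameter (assumed)
u = 0.98                           # marginal non-exceedance prob. of intensity
v = 0.95                           # marginal non-exceedance prob. of duration

p_and = 1.0 - u - v + gumbel_copula(u, v, theta)   # P(I > i AND D > d)
T_and = 1.0 / p_and                                # in years, for annual events
print(f"joint AND return period: {T_and:.1f} years")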
Kumar, Sanjeev; Karmeshu
2018-04-01
A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing the superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of the generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed-form expressions of probability distributions in terms of the generalized K-distribution. Based on a record of spiking activity of thousands of neurons, the findings of the proposed model are validated. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
Electron microprobe analysis program for biological specimens: BIOMAP
NASA Technical Reports Server (NTRS)
Edwards, B. F.
1972-01-01
BIOMAP is a Univac 1108 compatible program which facilitates the electron probe microanalysis of biological specimens. Input data are X-ray intensity data from biological samples, the X-ray intensity and composition data from a standard sample and the electron probe operating parameters. Outputs are estimates of the weight percentages of the analyzed elements, the distribution of these estimates for sets of red blood cells and the probabilities for correlation between elemental concentrations. An optional feature statistically estimates the X-ray intensity and residual background of a principal standard relative to a series of standards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medvedev, Emile S., E-mail: esmedved@orc.ru; Meshkov, Vladimir V.; Stolyarov, Andrey V.
In the recent work devoted to the calculation of the rovibrational line list of the CO molecule [G. Li et al., Astrophys. J., Suppl. Ser. 216, 15 (2015)], rigorous validation of the calculated parameters including intensities was carried out. In particular, the Normal Intensity Distribution Law (NIDL) [E. S. Medvedev, J. Chem. Phys. 137, 174307 (2012)] was employed for the validation purposes, and it was found that, in the original CO line list calculated for large changes of the vibrational quantum number up to Δn = 41, intensities with Δn > 11 were unphysical. Therefore, very high overtone transitions were removed from the published list in Li et al. Here, we show how this type of validation is carried out and prove that quadruple precision is indispensably required to predict reliable intensities using conventional 32-bit computers. Based on these calculations, the NIDL is shown to hold up for the 0 → n transitions till the dissociation limit around n = 83, covering 45 orders of magnitude in intensity. The low-intensity 0 → n transition predicted in the work of Medvedev [Determination of a new molecular constant for diatomic systems. Normal intensity distribution law for overtone spectra of diatomic and polyatomic molecules and anomalies in overtone absorption spectra of diatomic molecules, Institute of Chemical Physics, Russian Academy of Sciences, Chernogolovka, 1984] at n = 5 is confirmed, and two additional "abnormal" intensities are found at n = 14 and 23. Criteria for the appearance of such "anomalies" are formulated. The results could be useful to revise the high-overtone molecular transition probabilities provided in spectroscopic databases.
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
NASA Astrophysics Data System (ADS)
Naine, Tarun Bharath; Gundawar, Manoj Kumar
2017-09-01
We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate, for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. In the absence of an analytical solution, and because the differences in ignition times of adjacent reaction cells follow non-Markovian statistics, the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulations. However, such simulations, which are based on Monte-Carlo methods, require repeated calculations over different realizations of the distribution of adjacent cells. For several one-dimensional systems, differing in the value of the shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from a discrete probability between two limits show excellent agreement with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of burning limits of heterogeneous thermal mixtures. The proposed method eliminates the need for laborious, time-intensive numerical calculations: the thermal wave propagation rates can now be calculated based only on the macroscopic quantity of discrete probability.
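The Monte-Carlo baseline that the abstract contrasts with can be sketched in a few lines. The sketch below is only illustrative: the gamma-distributed gaps follow the abstract, but the ignition-delay law (delay growing exponentially with gap size), the non-dimensional parameter values, and the function names are placeholder assumptions rather than the authors' model.

    import numpy as np

    rng = np.random.default_rng(0)

    def front_speed(shape_k, mean_gap=1.0, n_cells=2000, t0=1.0, gap_scale=1.0):
        # Gaps between adjacent cells drawn from a gamma distribution with the
        # given shape parameter and fixed mean (non-dimensional units).
        gaps = rng.gamma(shape_k, scale=mean_gap / shape_k, size=n_cells)
        # Placeholder ignition-delay law: the delay of each cell grows
        # exponentially with the gap separating it from the burning neighbour.
        delays = t0 * np.exp(gaps / gap_scale)
        return gaps.sum() / delays.sum()

    # Average front speed over independent realizations for several shape values.
    for k in (0.5, 1.0, 2.0, 5.0):
        speeds = [front_speed(k) for _ in range(100)]
        print(f"shape parameter {k:>3}: mean front speed ~ {np.mean(speeds):.3f}")

The discrete-probability approach described in the abstract would replace the loop over random realizations with a single evaluation between the two stated limits.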
NASA Astrophysics Data System (ADS)
Strasser, M.; Moernaut, J.; Van Daele, M. E.; De Batist, M. A. O.
2017-12-01
Coastal paleoseismic records in south-central Chile indicate that giant megathrust earthquakes -such as in AD1960 (Mw9.5)- occur on average every 300 yrs. Based on geodetic data, it was postulated that the area already has the potential for a Mw8 earthquake. However, to estimate the probability for such a great earthquake from a paleo-perspective, one needs to reconstruct the long-term recurrence pattern of megathrust earthquakes. Here, we present two long lacustrine records, comprising up to 35 earthquake-triggered turbidites over the last 4800 yrs. Calibration of turbidite extent with historical earthquake intensity reveals a different macroseismic intensity threshold (≥VII½ vs. ≥VI½) for the generation of turbidites at the coring sites. The strongest earthquakes (≥VII½) have longer recurrence intervals (292 ±93 yrs) than earthquakes with intensity of ≥VI½ (139 ±69 yrs). The coefficient of variation (CoV) of inter-event times indicate that the strongest earthquakes recur in a quasi-periodic way (CoV: 0.32) and follow a normal distribution. Including also "smaller" earthquakes (Intensity down to VI½) increases the CoV (0.5) and fits best with a Weibull distribution. Regional correlation of our multi-threshold shaking records with coastal records of tsunami and coseismic subsidence suggests that the intensity ≥VII½ events repeatedly ruptured the same part of the megathrust over a distance of at least 300 km and can be assigned to a Mw ≥ 8.6. We hypothesize that a zone of high plate locking -identified by GPS data and large slip in AD 1960- acts as a dominant regional asperity, on which elastic strain builds up over several centuries and mostly gets released in quasi-periodic great and giant earthquakes. For the next 110 yrs, we infer an enhanced probability for a Mw 7.7-8.5 earthquake whereas the probability for a Mw ≥ 8.6 (AD1960-like) earthquake remains low.
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2017-03-01
We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the range of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.
Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities
NASA Astrophysics Data System (ADS)
Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii., IV.
A great deal of attention has been paid to the study of probability characteristics of electromagnetic waves scattered by one-dimensional fluctuations of the medium dielectric permittivity. However, the problem of determining the probability density and the average intensity of the field inside a stochastically inhomogeneous medium with arbitrary extension of the fluctuations has not been considered yet. It is the purpose of the present report to find and to analyze the indicated functions for a plane electromagnetic wave scattered by a layer with one-dimensional fluctuations of permittivity. We assumed that the length and the amplitude of individual fluctuations, as well as the interval between them, are random quantities. All of the indicated fluctuation parameters are supposed to be independent random values possessing Gaussian distributions. We considered the stationary-in-time cases of both small-scale and large-scale rarefied inhomogeneities. Mathematically, such a problem can be reduced to the solution of a Fredholm integral equation of the second kind for the Hertz potential (U). Using the decomposition of the field into a series of multiply scattered waves, we obtained the expression for the probability density of the field of the plane wave and determined the moments of the scattered field. We have shown that all odd moments of the centered field (U - <U>) are equal to zero and the even moments depend on the intensity. It was found that the probability density of the field possesses a Gaussian distribution. The average field is small compared with the standard fluctuation of the scattered field for all considered cases of inhomogeneities. The average intensity of the field is of the order of the standard deviation of the field-intensity fluctuations and decreases as the inhomogeneity length increases in the case of small-scale inhomogeneities. The behavior of the average intensity is more complicated in the case of large-scale medium inhomogeneities. The average intensity is an oscillating function of the average fluctuation length if the standard deviation of the inhomogeneity-length fluctuations is greater than the wavelength. When the standard deviation of the medium-inhomogeneity extension is smaller than the wavelength, the average intensity depends only weakly on the average fluctuation extension. The obtained results may be used for the analysis of electromagnetic wave propagation in media with fluctuating parameters caused by such factors as leaves of trees, cumulus clouds, internal gravity waves with a chaotic phase, etc. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).
Multi-beam transmitter geometries for free-space optical communications
NASA Astrophysics Data System (ADS)
Tellez, Jason A.; Schmidt, Jason D.
2010-02-01
Free-space optical communications systems provide the opportunity to take advantage of higher data transfer rates and lower probability of intercept compared to radio-frequency communications. However, propagation through atmospheric turbulence, such as for airborne laser communication over long paths, results in intensity variations at the receiver and a corresponding degradation in bit error rate (BER) performance. Previous literature has shown that two transmitters, when separated sufficiently, can effectively average out the intensity varying effects of the atmospheric turbulence at the receiver. This research explores the impacts of adding more transmitters and the marginal reduction in the probability of signal fades while minimizing the overall transmitter footprint, an important design factor when considering an airborne communications system. Analytical results for the cumulative distribution function are obtained for tilt-only results, while wave-optics simulations are used to simulate the effects of scintillation. These models show that the probability of signal fade is reduced as the number of transmitters is increased.
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
Segmentation and intensity estimation of microarray images using a gamma-t mixture model.
Baek, Jangsun; Son, Young Sook; McLachlan, Geoffrey J
2007-02-15
We present a new approach to the analysis of images for complementary DNA microarray experiments. The image segmentation and intensity estimation are performed simultaneously by adopting a two-component mixture model. One component of this mixture corresponds to the distribution of the background intensity, while the other corresponds to the distribution of the foreground intensity. The intensity measurement is a bivariate vector consisting of red and green intensities. The background intensity component is modeled by the bivariate gamma distribution, whose marginal densities for the red and green intensities are independent three-parameter gamma distributions with different parameters. The foreground intensity component is taken to be the bivariate t distribution, with the constraint that the mean of the foreground is greater than that of the background for each of the two colors. The degrees of freedom of this t distribution are inferred from the data but they could be specified in advance to reduce the computation time. Also, the covariance matrix is not restricted to being diagonal and so it allows for nonzero correlation between R and G foreground intensities. This gamma-t mixture model is fitted by maximum likelihood via the EM algorithm. A final step is executed whereby nonparametric (kernel) smoothing is undertaken of the posterior probabilities of component membership. The main advantages of this approach are: (1) it enjoys the well-known strengths of a mixture model, namely flexibility and adaptability to the data; (2) it considers the segmentation and intensity simultaneously and not separately as in commonly used existing software, and it also works with the red and green intensities in a bivariate framework as opposed to their separate estimation via univariate methods; (3) the use of the three-parameter gamma distribution for the background red and green intensities provides a much better fit than the normal (log normal) or t distributions; (4) the use of the bivariate t distribution for the foreground intensity provides a model that is less sensitive to extreme observations; (5) as a consequence of the aforementioned properties, it allows segmentation to be undertaken for a wide range of spot shapes, including doughnut, sickle shape and artifacts. We apply our method for gridding, segmentation and estimation to cDNA microarray real images and artificial data. Our method provides better segmentation results in spot shapes as well as intensity estimation than Spot and spotSegmentation R language softwares. It detected blank spots as well as bright artifact for the real data, and estimated spot intensities with high-accuracy for the synthetic data. The algorithms were implemented in Matlab. The Matlab codes implementing both the gridding and segmentation/estimation are available upon request. Supplementary material is available at Bioinformatics online.
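The classification step of the mixture approach described above can be illustrated with a simplified, one-dimensional sketch: a gamma-distributed background and a t-distributed foreground, with a pixel assigned a posterior probability of spot membership via Bayes' rule. The mixing weight and all distribution parameters below are hypothetical placeholders (the paper estimates them by EM, bivariately in the red and green channels).

    import numpy as np
    from scipy import stats

    # Hypothetical fitted parameters for a simplified 1-D illustration:
    # background ~ gamma, foreground ~ t (location-scale), mixing weight pi_bg.
    pi_bg = 0.6
    bg = stats.gamma(a=2.0, loc=50.0, scale=30.0)
    fg = stats.t(df=4.0, loc=400.0, scale=80.0)

    def posterior_foreground(intensity):
        # Posterior probability that a pixel with this intensity belongs to the
        # foreground (spot) component, computed with Bayes' rule.
        p_bg = pi_bg * bg.pdf(intensity)
        p_fg = (1.0 - pi_bg) * fg.pdf(intensity)
        return p_fg / (p_bg + p_fg)

    for x in (80.0, 200.0, 450.0):
        print(x, round(posterior_foreground(x), 3))

In the paper these posterior probabilities are additionally smoothed with a nonparametric kernel before the final segmentation.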
Gaussian statistics for palaeomagnetic vectors
NASA Astrophysics Data System (ADS)
Love, J. J.; Constable, C. G.
2003-03-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
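As a hedged sketch of the intensity marginal described above: for an isotropic three-dimensional Gaussian vector with mean length μ and per-component standard deviation σ, the magnitude B has the density below (this is a standard derivation in my own normalization; the authors' notation and parametrization may differ).

    p(B \mid \mu, \sigma) = \sqrt{\tfrac{2}{\pi}}\,\frac{B}{\mu\sigma}\,
        \exp\!\left(-\frac{B^{2}+\mu^{2}}{2\sigma^{2}}\right)
        \sinh\!\left(\frac{B\mu}{\sigma^{2}}\right), \qquad B \ge 0.

In the limit μ → 0 this reduces to a Maxwell distribution, and for σ/μ → 0 it concentrates into an approximately normal distribution about μ, consistent with the two limiting cases quoted in the abstract.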
Worden, C.B.; Wald, David J.; Rhoades, D.A.
2012-01-01
We use a database of approximately 200,000 modified Mercalli intensity (MMI) observations of California earthquakes collected from USGS "Did You Feel It?" (DYFI) reports, along with a comparable number of peak ground-motion amplitudes from California seismic networks, to develop probabilistic relationships between MMI and peak ground velocity (PGV), peak ground acceleration (PGA), and 0.3-s, 1-s, and 3-s 5% damped pseudospectral acceleration (PSA). After associating each ground-motion observation with an MMI computed from all the DYFI responses within 2 km of the observation, we derived a joint probability distribution between MMI and ground motion. We then derived reversible relationships between MMI and each ground-motion parameter by using a total least squares regression to fit a bilinear function to the median of the stacked probability distributions. Among the relationships, the fit to peak ground velocity has the smallest errors, though linear combinations of PGA and PGV give nominally better results. We also find that magnitude and distance terms reduce the overall residuals and are justifiable on an information theoretic basis. For intensities MMI≥5, our results are in close agreement with the relations of Wald, Quitoriano, Heaton, and Kanamori (1999); for lower intensities, our results fall midway between Wald, Quitoriano, Heaton, and Kanamori (1999) and those of Atkinson and Kaka (2007). The earthquakes in the study ranged in magnitude from 3.0 to 7.3, and the distances ranged from less than a kilometer to about 400 km from the source.
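A minimal sketch of the bilinear fitting step described above follows; it uses ordinary least squares rather than the total least squares regression of the study, and the stacked-median data points and initial guesses are hypothetical, not the published values.

    import numpy as np
    from scipy.optimize import curve_fit

    def bilinear(logpgv, c1, c2, c3, hinge):
        # Two-segment linear relation MMI(log10 PGV) with a single hinge point.
        return np.where(logpgv <= hinge,
                        c1 + c2 * logpgv,
                        c1 + c2 * hinge + c3 * (logpgv - hinge))

    # Hypothetical stacked medians (log10 PGV in cm/s, MMI); the study derives
    # these from ~200,000 DYFI intensities paired with network ground motions.
    logpgv = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
    mmi = np.array([2.4, 3.1, 3.8, 4.6, 6.1, 7.6, 9.0])

    coeffs, _ = curve_fit(bilinear, logpgv, mmi, p0=[3.8, 1.5, 3.0, 0.5])
    print("fitted bilinear coefficients:", np.round(coeffs, 2))

Because the relation is kept bilinear and monotonic, it remains reversible, i.e. it can be inverted to estimate ground motion from reported intensity.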
Mao, Chen-Chen; Zhou, Xing-Yu; Zhu, Jian-Rong; Zhang, Chun-Hui; Zhang, Chun-Mei; Wang, Qin
2018-05-14
Recently, Zhang et al. [Phys. Rev. A 95, 012333 (2017)] developed a new approach to estimate the failure probability for the decoy-state BB84 QKD system when taking the finite-size key effect into account, which offers security comparable to the Chernoff bound while resulting in an improved key rate and transmission distance. Building on Zhang et al.'s work, we now extend this approach to the case of measurement-device-independent quantum key distribution (MDI-QKD) and, for the first time, implement it in the four-intensity decoy-state MDI-QKD system. Moreover, through utilizing joint constraints and collective error-estimation techniques, we can markedly increase the performance of practical MDI-QKD systems compared with either three- or four-intensity decoy-state MDI-QKD using Chernoff bound analysis, and achieve a much higher security level compared with schemes applying Gaussian approximation analysis.
NASA Astrophysics Data System (ADS)
Ulanov, S. F.
1990-06-01
A method proposed for investigating the statistics of bulk optical breakdown relies on multifrequency lasers, which eliminates the influence of the laser radiation intensity statistics. The method is based on preliminary recording of the peak intensity statistics of multifrequency laser radiation pulses at the caustic using the optical breakdown threshold of K8 glass. The probability density distribution function was obtained at the focus for the peak intensities of the radiation pulses of a multifrequency laser. This method may be used to study the self-interaction under conditions of bulk optical breakdown of transparent dielectrics.
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude the probable negative values of the optical variable. The Pomraning-Eddington approximation is used, at first, to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
Assessing a Tornado Climatology from Global Tornado Intensity Distributions.
NASA Astrophysics Data System (ADS)
Feuerstein, Bernold; Dotzek, Nikolai; Grieser, Jürgen
2005-02-01
Recent work demonstrated that the shape of tornado intensity distributions from various regions worldwide is well described by Weibull functions. This statistical modeling revealed a strong correlation between the fit parameters c for shape and b for scale regardless of the data source. In the present work it is shown that the quality of the Weibull fits is optimized if only tornado reports of F1 and higher intensity are used and that the c-b correlation does indeed reflect a universal feature of the observed tornado intensity distributions. For regions with likely supercell tornado dominance, this feature is the number ratio of F4 to F3 tornado reports R(F4/F3) = 0.238. The c-b diagram for the Weibull shape and scale parameters is used as a climatological chart, which allows different types of tornado climatology to be distinguished, presumably arising from supercell versus nonsupercell tornadogenesis. Assuming temporal invariance of the climatology and using a detection efficiency function for tornado observations, a stationary climatological probability distribution from large tornado records (U.S. decadal data 1950-99) is extracted. This can be used for risk assessment, comparative studies on tornado intensity distributions worldwide, and estimates of the degree of underreporting for areas with poor databases. For the 1990s U.S. data, a likely tornado underreporting of the weak events (F0, F1) by a factor of 2 can be diagnosed, as well as asymptotic climatological c,b values of c = 1.79 and b = 2.13, to which a convergence in the 1950-99 U.S. decadal data is verified.
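The Weibull fitting described above can be sketched as a truncated multinomial maximum-likelihood fit over F-scale classes. The report counts below are placeholders, and binning the F-scale into unit-width classes starting at F1 is my assumption rather than the authors' exact procedure.

    import numpy as np
    from scipy.optimize import minimize

    # Placeholder report counts for F1..F5 (F0 excluded, as the paper recommends).
    counts = np.array([14000, 5500, 1700, 400, 60])
    edges = np.arange(1.0, 7.0)  # class boundaries F1..F6 on the intensity axis

    def neg_loglik(params):
        c, b = params
        if c <= 0.0 or b <= 0.0:
            return np.inf
        cdf = 1.0 - np.exp(-(edges / b) ** c)
        p = np.diff(cdf)          # probability mass of each F class
        p = p / p.sum()           # renormalize to the truncated (>= F1) range
        return -np.sum(counts * np.log(p))

    res = minimize(neg_loglik, x0=[1.8, 2.1], method="Nelder-Mead")
    c_hat, b_hat = res.x
    p = np.diff(1.0 - np.exp(-(edges / b_hat) ** c_hat))
    p = p / p.sum()
    print(f"c = {c_hat:.2f}, b = {b_hat:.2f}, R(F4/F3) = {p[3] / p[2]:.3f}")

The fitted (c, b) pair can then be placed on the climatological c-b chart, and the ratio R(F4/F3) compared against the supercell-dominated value quoted in the abstract.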
Kendal, W S
2000-04-01
To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
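The final step implied by the abstract can be written out in one line. Assuming, as stated, that the number N of lethal lesions per cell is Poisson distributed with mean αD + βD², the composite PGF and the zero-lesion (survival) probability are

    G(z) = \exp\!\left[(\alpha D + \beta D^{2})(z - 1)\right],
    \qquad
    S(D) = P(N = 0) = G(0) = \exp\!\left[-(\alpha D + \beta D^{2})\right],

so that -ln S(D) = αD + βD², which is the conventional linear-quadratic survival relation.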
Flavor Identification and Intensity: Effects of Stimulus Context
Hallowell, Emily S.; Parikh, Roshan; Veldhuizen, Maria G.
2016-01-01
Two experiments presented oral mixtures containing different proportions of the gustatory flavorant sucrose and an olfactory flavorant, either citral (Experiment 1) or lemon (Experiment 2). In 4 different sessions of each experiment, subjects identified each mixture as "mostly sugar" or "mostly citrus/lemon" or rated the perceived intensities of the sweet and citrus components. Different sessions also presented the mixtures in different contexts, with mixtures containing relatively high concentrations of sucrose or citral/lemon presented more often (skew sucrose or skew citral/lemon). As expected, in both experiments, varying stimulus context affected both identification and perceived intensity: Skewing to sucrose versus citral/lemon decreased the probability of identifying the stimuli as "mostly sugar" and reduced the ratings of sweet intensity relative to citrus intensity. Across both contextual conditions of both experiments, flavor identification associated closely with the ratio of the perceived sweet and citrus intensities. The results accord with a model, extrapolated from signal-detection theory, in which sensory events are represented as multisensory–multidimensional distributions in perceptual space. Changing stimulus context can shift the locations of the distributions relative to response criteria. Decision rules guide judgments based on both sensory events and criteria, and these rules are not necessarily identical in tasks of identification and intensity rating. PMID:26830499
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
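The averaging step behind such an analysis can be sketched for the simplest single-branch (SISO) case: the conditional bit-error probability is integrated over the negative-exponential irradiance density. Taking the conditional BER as Q(sqrt(mean SNR) · I) is a simplification for illustration only; the Letter treats the MIMO case and derives a closed-form approximation rather than integrating numerically.

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erfc

    def q_func(x):
        # Gaussian Q-function expressed via the complementary error function.
        return 0.5 * erfc(x / np.sqrt(2.0))

    def average_ber(mean_snr_db):
        # Average BER of a single IM/DD link over negative-exponential fading,
        # with irradiance I modelled as unit-mean exponential (strong turbulence).
        g = np.sqrt(10.0 ** (mean_snr_db / 10.0))
        integrand = lambda i: q_func(g * i) * np.exp(-i)
        ber, _ = quad(integrand, 0.0, np.inf)
        return ber

    for snr in (10, 20, 30):
        print(snr, "dB ->", f"{average_ber(snr):.3e}")

Multiple transmit/receive apertures effectively average the irradiance, which is why the MIMO configurations studied in the Letter achieve much steeper BER curves than this single-branch baseline.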
NASA Astrophysics Data System (ADS)
Corrêa, E. L.; Silva, J. O.; Vivolo, V.; Potiens, M. P. A.; Daros, K. A. C.; Medeiros, R. B.
2014-02-01
This study presents the results of the intensity variation of the radiation field in a mammographic system using the thermoluminescent dosimeter TLD-900 (CaSO4:Dy). These TLDs were calibrated and characterized in an industrial X-ray system used for instrument calibration, in the energy range used in mammography. They were distributed in a matrix of 19 lines and five columns, covering an area of 18 cm × 8 cm in the center of the radiation field of the clinical equipment. The results showed a variation of the intensity that is probably explained by the non-uniformity of the field due to the heel effect.
Delay-induced stochastic bifurcations in a bistable system under white noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Zhongkui, E-mail: sunzk@nwpu.edu.cn; Fu, Jin; Xu, Wei
2015-08-15
In this paper, the effects of noise and time delay on stochastic bifurcations are investigated theoretically and numerically in a time-delayed Duffing-Van der Pol oscillator subjected to white noise. Due to the time delay, the random response is not Markovian. Thereby, approximate methods have been adopted to obtain the Fokker-Planck-Kolmogorov equation and the stationary probability density function for amplitude of the response. Based on the knowledge that stochastic bifurcation is characterized by the qualitative properties of the steady-state probability distribution, it is found that time delay and feedback intensity as well as noise intensity will induce the appearance of stochastic P-bifurcation. Besides, results demonstrated that the effects of the strength of the delayed displacement feedback on stochastic bifurcation are accompanied by the sensitive dependence on time delay. Furthermore, the results from numerical simulations best confirm the effectiveness of the theoretical analyses.
Cueing properties of the decrease of white noise intensity for avoidance conditioning in cats.
Zieliński, K
1979-01-01
In the main experiment two groups of 6 cats each were trained in active bar-pressing avoidance to a CS consisting of either a 10 dB or a 20 dB decrease of the background white noise of 70 dB intensity. The two groups did not differ in rapidity of learning; however, cats trained to the greater change in background noise performed avoidance responses with shorter latencies than did cats trained to the smaller change. Within-group comparisons of cumulative distributions of response latencies for consecutive Vincentized fifths of avoidance acquisition showed the greatest changes in the region of latencies longer than the median latency of instrumental responses. On the other hand, the effects of CS intensity found in between-group comparisons were located in the region of latencies shorter than the median latency of either group. Comparisons with data obtained in a complementary experiment employing an additional 17 cats showed that subjects trained to stimuli less intense than the background noise level were marked by an exceptionally low level of avoidance responding with latencies shorter than 1.1 s, which was lower than expected from the probability of intertrial responses for this period of time. Due to this property of stimuli less intense than the background, the distributions of response latencies were shifted to the right; in effect, prefrontal lesions influenced a greater part of the latency distributions than in cats trained to stimuli more intense than the background.
Remote sensing of mesospheric electric fields using MF radars
NASA Astrophysics Data System (ADS)
Meek, C. E.; Manson, A. H.; Martynenko, S. I.; Rozumenko, V. T.; Tyrnov, O. F.
2004-07-01
Large mesospheric electric fields can play an essential role in middle atmospheric electrodynamics (see, e.g., Goldberg, R. A., Middle Atmospheric Electrodynamics during MAP, Adv. Space Res. 10 (10) (1990) 209). The V/m electric fields of atmospheric origin can be the possible cause of large variations in the electron collision frequency at mesospheric altitudes, and this provides a unique opportunity to take measurements of electric fields in the lower ionosphere by using remote sensing instruments employing radiowave techniques. A technique has been proposed for making estimates of large mesospheric electric field intensities on the lower edge of the ionosphere by using MF radar data and the inherent effective electron collision frequency. To do this, data collected in Canada and Ukraine were utilized. The developed technique permits the changes in mesospheric electric field intensities to be derived from MF radar data in real time. The statistical analysis of data consistent with large mesospheric electric field intensities in the 60-67km region resulted in the following inferences. There are at least two mechanisms for the generation of large mesospheric electric fields in the mesosphere. The most likely mechanism, with a probability of 60-70%, is the summation of random fields from a large number of elementary small-scale mesospheric generators, which results in a one-parameter Rayleigh distribution of the total large mesospheric electric field intensity E with a mean value of approximately 0.7-0.9V/m in the 60-67km altitude region, or in the corresponding one-parameter exponential distribution of the intensity squared E2 of large mesospheric electric fields. The second mechanism of unknown nature, with 5-15% probability, gives rise to the sporadic appearance of large mesospheric electric field intensities E>2.5V/m with a mean of 4V/m. Statistically significant seasonal differences in the averaged large mesospheric electric field parameters have not been revealed. The probability of the absence of local large mesospheric electric fields amounts to approximately 25% for Ukraine and approximately 30% for Canada. A comparison of the Ukrainian and Canadian data indicates the possible existence of a latitudinal dependence in mean large mesospheric electric field features. Hence, the large electric fields are an additional source of electron heating that must be taken into account in studying a disturbed lower ionosphere and radio wave propagation within it.
NASA Astrophysics Data System (ADS)
Jun, Changhyun; Qin, Xiaosheng; Gan, Thian Yew; Tung, Yeou-Koung; De Michele, Carlo
2017-10-01
This study presents a storm-event-based bivariate frequency analysis approach to determine design rainfalls in which the number, intensity, and duration of actual rainstorm events are considered. To derive more realistic design storms, the occurrence probability of an individual rainstorm event was determined from the joint distribution of storm intensity and duration through a copula model. Hourly rainfall data were used at three climate stations located in Singapore, South Korea and Canada, respectively. It was found that the proposed approach could give a more realistic description of rainfall characteristics of rainstorm events and design rainfalls. As a result, the design rainfall quantities derived from actual rainstorm events at the three studied sites are consistently lower than those obtained from the conventional rainfall depth-duration-frequency (DDF) method, especially for short-duration storms (such as 1-h). This stems from accounting for the occurrence probability of each rainstorm event and from taking a different angle on rainfall frequency analysis, and it could offer an alternative way of describing extreme rainfall properties and potentially help improve the hydrologic design of stormwater management facilities in urban areas.
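The event-based step described above can be sketched as follows: fit marginals to event intensity and duration, couple them with a copula, and convert a joint exceedance probability into a return period. The Gumbel-Hougaard copula family, the dependence parameter, the marginal probabilities, and the event rate below are all placeholders, not the values fitted in the study.

    import numpy as np

    def gumbel_copula(u, v, theta):
        # Gumbel-Hougaard copula CDF C(u, v); theta >= 1 controls dependence.
        return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

    # Placeholder marginal non-exceedance probabilities of a candidate design storm.
    u = 0.98               # F_I(intensity)
    v = 0.95               # F_D(duration)
    theta = 2.0            # placeholder dependence parameter
    events_per_year = 40.0 # average number of independent rainstorm events per year

    # "AND" exceedance: both intensity and duration are exceeded.
    p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
    print("joint (AND) return period:", 1.0 / (events_per_year * p_and), "years")

Because the return period is referenced to the rate of rainstorm events rather than to annual maxima, the resulting design quantities can differ from those of a conventional DDF analysis, which is the comparison made in the abstract.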
NASA Astrophysics Data System (ADS)
Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu
2017-10-01
Environmental changes have brought significant changes and challenges to water resources and their management worldwide; these include increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and, in particular, much more frequent extreme precipitation events. All of these greatly affect water resources and socio-economic development. In this study, we take extreme precipitation events in the midwest of Jilin Province as an example; daily precipitation data during 1960-2014 are used. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR), and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and the Kolmogorov-Smirnov (K-S) test is then employed to determine the optimal probability distribution function of each indicator. On this basis, a nonparametric estimation method for the copula and the Akaike Information Criterion (AIC) are adopted to determine the bivariate copula function. Finally, we analyze the characteristics of the single-variable extremes and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in semi-arid areas is far lower than that in subhumid areas. The extreme precipitation frequency shows a significant decline while the extreme precipitation intensity shows a growing trend; there are significant spatiotemporal differences in extreme precipitation events. The joint return period becomes shorter from west to east, whereas the spatial distribution of the co-occurrence return period shows the opposite pattern, and it is longer than the joint return period.
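For reference, the two return periods mentioned at the end of the abstract are usually defined in copula notation as below, with u and v the marginal non-exceedance probabilities, C the fitted copula, and μ the mean inter-arrival time of the sampled events; this is the standard formulation, and the authors' exact definitions may differ.

    T_{\mathrm{joint}} = \frac{\mu}{1 - C(u, v)},
    \qquad
    T_{\mathrm{co\text{-}occurrence}} = \frac{\mu}{1 - u - v + C(u, v)}.

Since 1 - u - v + C(u, v) ≤ 1 - C(u, v), the co-occurrence return period is always at least as long as the joint return period, consistent with the last sentence of the abstract.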
NASA Astrophysics Data System (ADS)
Eliseev, A. A.; Gorozhankin, D. F.; Napolskii, K. S.; Petukhov, A. V.; Sapoletova, N. A.; Vasilieva, A. V.; Grigoryeva, N. A.; Mistonov, A. A.; Byelov, D. V.; Bouwman, W. G.; Kvashnina, K. O.; Chernyshov, D. Yu.; Bosak, A. A.; Grigoriev, S. V.
2009-10-01
The distribution of the scattering intensity in the reciprocal space for natural and artificial opals has been reconstructed from a set of small-angle X-ray diffraction patterns. The resulting three-dimensional intensity maps are used to analyze the defect structure of opals. The structure of artificial opals can be satisfactorily described in the Wilson probability model with the prevalence of layers in the fcc environment. The diffraction patterns observed for a natural opal confirm the presence of sufficiently long unequally occupied fcc domains.
PHOTOTROPISM OF GERMINATING MYCELIA OF SOME PARASITIC FUNGI
uredinales on young wheat plants; Distribution and significance of the phototropism of germinating mycelia -- confirmation of older data, examination of ... eight additional uredinales, probable meaning of negative phototropism for the occurrence of infection; Analysis of the stimulus physiology of the ... reaction -- the minimum effective illumination intensity, the effective spectral region, inversion of the phototropic reaction in liquid paraffin, the negative light-growth reaction, the light-sensitive zone.
Analyzing wildfire exposure and source–sink relationships on a fire prone forest landscape
Alan A. Ager; Nicole M. Vaillant; Mark A. Finney; Haiganoush K. Preisler
2012-01-01
We used simulation modeling to analyze wildfire exposure to social and ecological values on a 0.6 million ha national forest in central Oregon, USA. We simulated 50,000 wildfires that replicated recent fire events in the area and generated detailed maps of burn probability (BP) and fire intensity distributions. We also recorded the ignition locations and size of each...
Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J
2016-12-01
This study presents the development of an alternative noise current term and novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour or microscopic fluctuations of the node of Ranvier's voltage-dependent sodium channels of which the intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: it is independent of membrane voltage, and it is unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining its low computational cost and ease of implementation compared to other conductance and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter and a decrease in latency with an increase in stimulus intensity.
Econophysics: Two-phase behaviour of financial markets
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Stanley, H. Eugene
2003-01-01
Buying and selling in financial markets is driven by demand, which can be quantified by the imbalance in the number of shares transacted by buyers and sellers over a given time interval. Here we analyse the probability distribution of demand, conditioned on its local noise intensity Σ, and discover the surprising existence of a critical threshold, Σc. For Σ < Σc, the most probable value of demand is roughly zero; we interpret this as an equilibrium phase in which neither buying nor selling predominates. For Σ > Σc, two most probable values emerge that are symmetrical around zero demand, corresponding to excess demand and excess supply; we interpret this as an out-of-equilibrium phase in which the market behaviour is mainly buying for half of the time, and mainly selling for the other half.
Effects of urbanization on carnivore species distribution and richness
Ordenana, Miguel A.; Crooks, Kevin R.; Boydston, Erin E.; Fisher, Robert N.; Lyren, Lisa M.; Siudyla, Shalene; Haas, Christopher D.; Harris, Sierra; Hathaway, Stacie A.; Turschak, Greta M.; Miles, A. Keith; Van Vuren, Dirk H.
2010-01-01
Urban development can have multiple effects on mammalian carnivore communities. We conducted a meta-analysis of 7,929 photographs from 217 localities in 11 camera-trap studies across coastal southern California to describe habitat use and determine the effects of urban proximity (distance to urban edge) and intensity (percentage of area urbanized) on carnivore occurrence and species richness in natural habitats close to the urban boundary. Coyotes (Canis latrans) and bobcats (Lynx rufus) were distributed widely across the region. Domestic dogs (Canis lupus familiaris), striped skunks (Mephitis mephitis), raccoons (Procyon lotor), gray foxes (Urocyon cinereoargenteus), mountain lions (Puma concolor), and Virginia opossums (Didelphis virginiana) were detected less frequently, and long-tailed weasels (Mustela frenata), American badgers (Taxidea taxus), western spotted skunks (Spilogale gracilis), and domestic cats (Felis catus) were detected rarely. Habitat use generally reflected availability for most species. Coyote and raccoon occurrence increased with both proximity to and intensity of urbanization, whereas bobcat, gray fox, and mountain lion occurrence decreased with urban proximity and intensity. Domestic dogs and Virginia opossums exhibited positive and weak negative relationships, respectively, with urban intensity but were unaffected by urban proximity. Striped skunk occurrence increased with urban proximity but decreased with urban intensity. Native species richness was negatively associated with urban intensity but not urban proximity, probably because of the stronger negative response of individual species to urban intensity.
A probabilistic tornado wind hazard model for the continental United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, Q; Kimball, J; Mensing, R
A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the continental United States (CONUS) is assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an aerial probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
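A highly simplified point-strike version of such a hazard calculation is sketched below. It ignores facility size and orientation, path direction, within-path wind variation, and classification error, all of which the full model includes, and the occurrence rates and damage areas are placeholders rather than values from the report.

    import numpy as np

    # Placeholder annual occurrence rates (per year) and mean damage areas (km^2)
    # for F2..F5 tornadoes within a region of interest.
    rates = {"F2": 1.2, "F3": 0.4, "F4": 0.10, "F5": 0.02}
    areas = {"F2": 1.0, "F3": 3.0, "F4": 8.0, "F5": 15.0}
    region_area_km2 = 1.0e5

    def annual_strike_probability(min_scale):
        # Poisson probability that a point in the region is struck in one year by
        # a tornado of at least the given F-scale (point-target simplification).
        order = ["F2", "F3", "F4", "F5"]
        keep = order[order.index(min_scale):]
        rate = sum(rates[f] * areas[f] / region_area_km2 for f in keep)
        return 1.0 - np.exp(-rate)

    for f in ("F2", "F3", "F4", "F5"):
        print(f, f"{annual_strike_probability(f):.2e}")

The aerial probability model in the report generalizes this by convolving the facility footprint with the damage-area geometry and by propagating the epistemic uncertainty in the rate and area distributions.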
The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy
NASA Astrophysics Data System (ADS)
Siebers, J. V.; Keall, P. J.; Mohan, R.
The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduces phantom dose measurements, the TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.
NASA Astrophysics Data System (ADS)
Bourbeau, D. J.; Hokanson, J. A.; Rubin, J. E.; Weber, D. J.
2011-10-01
Primary afferent microstimulation has been proposed as a method for activating cutaneous and muscle afferent fibers to restore tactile and proprioceptive feedback after limb loss or peripheral neuropathy. Large populations of primary afferent fibers can be accessed directly by implanting microelectrode arrays in the dorsal root ganglia (DRG), which provide a compact and stable target for stimulating a diverse group of sensory fibers. To gain insight into factors affecting the number and types of primary afferents activated, we developed a computational model that simulates the recruitment of fibers in the feline L7 DRG. The model comprises two parts. The first part is a single-fiber model used to describe the current-distance relation and was based on the McIntyre-Richardson-Grill model for excitability. The second part uses the results of the singe-fiber model and published data on fiber size distributions to predict the probability of recruiting a given number of fibers as a function of stimulus intensity. The range of intensities over which exactly one fiber was recruited was approximately 0.5-5 µA (0.1-1 nC per phase); the stimulus intensity at which the probability of recruiting exactly one fiber was maximized was 2.3 µA. However, at 2.3 µA, it was also possible to recruit up to three fibers, albeit with a lower probability. Stimulation amplitudes up to 6 µA were tested with the population model, which showed that as the amplitude increased, the number of fibers recruited increased exponentially. The distribution of threshold amplitudes predicted by the model was similar to that previously reported by in vivo experimentation. Finally, the model suggested that medium diameter fibers (7.3-11.5 µm) may be recruited with much greater probability than large diameter fibers (12.8-16 µm). This model may be used to efficiently test a range of stimulation parameters and nerve morphologies to complement results from electrophysiology experiments and to aid in the design of microelectrode arrays for neural interfaces.
Characteristics of Landslide Size Distribution in Response to Different Rainfall Scenarios
NASA Astrophysics Data System (ADS)
Wu, Y.; Lan, H.; Li, L.
2017-12-01
There have long been controversies on the characteristics of landslide size distribution in response to different rainfall scenarios. For inspecting the characteristics, we have collected a large amount of data, including shallow landslide inventory with landslide areas and landslide occurrence times recorded, and a longtime daily rainfall series fully covering all the landslide occurrences. Three indexes were adopted to quantitatively describe the characteristics of landslide-related rainfall events, which are rainfall duration, rainfall intensity, and the number of rainy days. The first index, rainfall duration, is derived from the exceptional character of a landslide-related rainfall event, which can be explained in terms of the recurrence interval or return period, according to the extreme value theory. The second index, rainfall intensity, is the average rainfall in this duration. The third index is the number of rainy days in this duration. These three indexes were normalized using the standard score method to ensure that they are in the same order of magnitude. Based on these three indexes, landslide-related rainfall events were categorized by a k-means method into four scenarios: moderate rainfall, storm, long-duration rainfall, and long-duration intermittent rainfall. Then, landslides were in turn categorized into four groups according to the scenarios of rainfall events related to them. Inverse-gamma distribution was applied to characterize the area distributions of the four different landslide groups. A tail index and a rollover of the landslide size distribution can be obtained according to the parameters of the distribution. Characteristics of landslide size distribution show that the rollovers of the size distributions of landslides related to storm and long-duration rainfall are larger than those of landslides in the other two groups. It may indicate that the location of rollover may shift right with the increase of rainfall intensity and the extension of rainfall duration. In addition, higher rainfall intensities are prone to trigger larger rainfall-induced landslides since the tail index of landslide area distribution are smaller for higher rainfall intensities, which indicate higher probabilities of large landslides.
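For context, the inverse-gamma form commonly used for landslide-area statistics, which appears to be the distribution referred to above, is the three-parameter density below (the symbols follow the usual convention and are my assumption, not necessarily the authors' notation):

    p(A) = \frac{1}{a\,\Gamma(\rho)}
           \left[\frac{a}{A - s}\right]^{\rho + 1}
           \exp\!\left(-\frac{a}{A - s}\right), \qquad A > s.

Its mode (the "rollover") lies at A = a/(ρ + 1) + s, and its right tail decays as a power law with exponent ρ + 1 (the "tail index"), so a smaller tail index does indeed correspond to a relatively higher probability of large landslides, as stated in the abstract.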
Development of a European Ensemble System for Seasonal Prediction: Application to crop yield
NASA Astrophysics Data System (ADS)
Terres, J. M.; Cantelaube, P.
2003-04-01
Western European agriculture is highly intensive, and the weather is the main source of uncertainty for crop yield assessment and for crop management. In the current system, at the time when a crop yield forecast is issued, the weather conditions leading up to harvest time are unknown and are therefore a major source of uncertainty. The use of seasonal weather forecasts would bring additional information for the remaining crop season and would have valuable benefits for improving the management of agricultural markets and environmentally sustainable farm practices. An innovative method for supplying seasonal forecast information to crop simulation models has been developed in the frame of the EU-funded research project DEMETER. It consists of running a crop model on each individual member of the seasonal hindcasts to derive a probability distribution of crop yield. Preliminary results for the cumulative probability function of wheat yield provide information on both the yield anomaly and the reliability of the forecast. Based on the spread of the probability distribution, the end-user can directly quantify the benefits and risks of taking weather-sensitive decisions.
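The ensemble-to-distribution step described above can be sketched in a few lines: run a crop model on each seasonal-hindcast member and build an empirical cumulative distribution of yield. The crop_model function, the synthetic "weather" members, and the reference yield below are hypothetical placeholders, not the DEMETER hindcasts or an actual crop simulator.

    import numpy as np

    def crop_model(member_weather):
        # Hypothetical placeholder for a crop simulation driven by one ensemble
        # member's daily weather; returns a yield in t/ha.
        return 5.0 + 0.8 * np.tanh(member_weather.mean())

    rng = np.random.default_rng(1)
    ensemble = [rng.normal(size=180) for _ in range(40)]  # 40 placeholder members

    yields = np.sort([crop_model(m) for m in ensemble])
    cdf = np.arange(1, len(yields) + 1) / len(yields)

    # Probability that yield falls below a hypothetical reference value.
    reference = 5.0
    print("P(yield < reference) ~", np.mean(yields < reference))

The spread of this empirical distribution is what the end-user would read as the reliability of the forecast, while its displacement from climatology gives the yield anomaly.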
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
Failure-probability driven dose painting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.
Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: Patients treated at our center have five tumor subvolumes delineated, from the center of the tumor (PET-positive volume) outwards. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose–response is extracted from the literature and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%–91%) as compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity.
Probabilistic finite elements for fracture mechanics
NASA Technical Reports Server (NTRS)
Besterfield, Glen
1988-01-01
The probabilistic finite element method (PFEM) is developed for probabilistic fracture mechanics (PFM). A finite element which has the near crack-tip singular strain embedded in the element is used. Probabilistic measures, such as the expectation, covariance, and correlation of the stress intensity factors, are calculated for random load, random material properties, and random crack length. The method is computationally quite efficient and can be used to determine the probability of fracture or the reliability.
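The embedded-singularity element itself is not reproduced here, but the flavor of propagating load and crack-length randomness to a stress intensity factor can be illustrated with a first-order second-moment calculation for K_I = Y·σ·sqrt(π·a); the input statistics and the geometry factor Y below are assumed values, not data from the paper.

```python
import numpy as np

def fosm_stress_intensity(mu_sigma, sd_sigma, mu_a, sd_a, Y=1.12):
    """First-order second-moment estimate of K_I = Y * sigma * sqrt(pi * a)
    for independent random far-field stress sigma and crack length a."""
    K_mean = Y * mu_sigma * np.sqrt(np.pi * mu_a)
    # Partial derivatives evaluated at the mean point
    dK_dsigma = Y * np.sqrt(np.pi * mu_a)
    dK_da = 0.5 * Y * mu_sigma * np.sqrt(np.pi / mu_a)
    K_var = (dK_dsigma * sd_sigma) ** 2 + (dK_da * sd_a) ** 2
    return K_mean, np.sqrt(K_var)

# Hypothetical input statistics (stress in MPa, crack length in metres)
K_mu, K_sd = fosm_stress_intensity(mu_sigma=100.0, sd_sigma=10.0, mu_a=0.02, sd_a=0.003)
print(f"K_I mean = {K_mu:.1f} MPa*sqrt(m), std = {K_sd:.1f} MPa*sqrt(m)")
```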
Possible directions of refining criteria of radiation safety of spaceflights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovalev, Y.Y.; Petrov, V.M.; Sakovich, V.A.
The possibility of characterizing space flight radiation safety is considered using a value which is integrated over the flight time, takes into account the radiation processes in an irradiated body and averages the probability of adverse radiobiological effects with respect to the distribution of solar proton flares of varying intensity. The proposed characteristic is compared with the current standards with reference to a hypothetical interplanetary flight.
Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments
NASA Technical Reports Server (NTRS)
Carmen, Christina; Feikema, Douglas A.
2003-01-01
The present paper describes the results obtained from a one-dimensional time dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results for moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable, as traditionally believed, to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. Flame conditions studied have fuel equivalence ratios in the range phi = 0.6 to 0.9 at standard temperature and pressure.
A Minimum Assumption Tornado-Hazard Probability Model.
NASA Astrophysics Data System (ADS)
Schaefer, Joseph T.; Kelly, Donald L.; Abbey, Robert F.
1986-12-01
One of the principal applications of climatological tornado data is in tornado-hazard assessment. To perform such a hazard-potential determination, historical tornado characteristics in either a regional or local area are compiled. A model is then used to determine a site-specific point probability of a tornado greater than a specified intensity occurring. Various models require different climatological input. However, a knowledge of the mean values of tornado track length, tornado track width, tornado affected area and tornado occurrence rate, as both a function of tornado intensity and geographic area, along with a violence frequency distribution, enables most of the models to be applied. The NSSFC-NRC tornado data base is used to supply input for the determination of these parameters over the United States. This climatic data base has undergone extensive updating and quality control since it was last reported. For track parameters, internally redundant data were used to check consistency. Further, reports which deviated significantly from the mean were individually checked. Intensity data have been compared with the University of Chicago DAPPLE tornado data base. All tornadoes whose recorded intensities differed by more than one category were reclassified by an independent scientist so that the two data sets are consistent.
NASA Technical Reports Server (NTRS)
Gouge, Michael F.
2011-01-01
Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are currently being put to new uses with emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities over a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve code concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.
Osche, G R
2000-08-20
Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.
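For orientation, the sketch below evaluates the standard negative binomial photocount distribution that arises for aperture-averaged, fully developed speckle with no specular component, a limiting case of the generalized distribution discussed in the abstract; the mean count and the number of coherence cells are assumed values.

```python
import numpy as np
from scipy.special import gammaln

def negbin_photocount_pmf(n, nbar, M):
    """Negative binomial photocount distribution for aperture-averaged, fully
    developed speckle with mean count nbar and M integrated coherence cells."""
    n = np.asarray(n, dtype=float)
    logp = (gammaln(n + M) - gammaln(M) - gammaln(n + 1)
            - n * np.log1p(M / nbar) - M * np.log1p(nbar / M))
    return np.exp(logp)

counts = np.arange(0, 80)
pmf = negbin_photocount_pmf(counts, nbar=10.0, M=4.0)   # assumed nbar and M
print("sum of pmf (should be ~1):", pmf.sum())
```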
Fluctuation behaviors of financial return volatility duration
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun; Lu, Yunfan
2016-04-01
It is crucially important to understand the return volatility of financial markets because it helps to quantify the investment risk, optimize the portfolio, and provide a key input of option pricing models. The characteristics of isolated high volatility events above a certain threshold in price fluctuations and the distributions of return intervals between these events arouse great interest in financial research. In the present work, we introduce a new concept of daily return volatility duration, which is defined as the shortest passage time when the future volatility intensity is above or below the current volatility intensity (without predefining a threshold). The statistical properties of the daily return volatility durations for seven representative stock indices from the world financial markets are investigated. Some useful and interesting empirical results of these volatility duration series about the probability distributions, memory effects and multifractal properties are obtained. These results also show that the proposed stock volatility series analysis is a meaningful and beneficial trial.
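One plausible reading of the volatility-duration definition can be coded as follows, taking |daily return| as the volatility intensity and measuring the shortest passage time until it first crosses its current level; the synthetic heavy-tailed returns are placeholders, not the seven stock indices.

```python
import numpy as np

def volatility_durations(returns, mode="above"):
    """Shortest passage time (in days) until the volatility intensity |r| first
    rises above ('above') or falls below ('below') its current value. Days with
    no crossing before the series ends are returned as NaN."""
    vol = np.abs(np.asarray(returns, dtype=float))
    durations = np.full(len(vol), np.nan)
    for t in range(len(vol) - 1):
        future = vol[t + 1:]
        hits = np.nonzero(future > vol[t] if mode == "above" else future < vol[t])[0]
        if hits.size:
            durations[t] = hits[0] + 1
    return durations

rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=2000) * 0.01        # heavy-tailed synthetic daily returns
d = volatility_durations(r, mode="above")
print("mean 'above' duration:", np.nanmean(d), "days")
```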
Study on typhoon characteristic based on bridge health monitoring system.
Wang, Xu; Chen, Bin; Sun, Dezhang; Wu, Yinqiang
2014-01-01
Through the wind velocity and direction monitoring system installed on Jiubao Bridge of Qiantang River, Hangzhou city, Zhejiang province, China, a full range of wind velocity and direction data was collected during typhoon HAIKUI in 2012. Based on these data, it was found that, at higher observed elevation, turbulence intensity is lower, and the variation tendency of longitudinal and lateral turbulence intensities with mean wind speeds is basically the same. Gust factor goes higher with increasing mean wind speed, and the change rate obviously decreases as wind speed goes down and an inconspicuous increase occurs when wind speed is high. The change of peak factor is inconspicuous with increasing time and mean wind speed. The probability density function (PDF) of fluctuating wind speed follows Gaussian distribution. Turbulence integral scale increases with mean wind speed, and its PDF does not follow Gaussian distribution. The power spectrum of observation fluctuating velocity is in accordance with Von Karman spectrum.
Ellipticity dependence of multiple ionisation of methyl iodide clusters using a 532 nm nanosecond laser
NASA Astrophysics Data System (ADS)
Tang, Bin; Zhao, Wuduo; Wang, Weiguo; Hua, Lei; Chen, Ping; Hou, Keyong; Huang, Yunguang; Li, Haiyang
2016-03-01
The dependence of multiply charged ions on laser ellipticity in methyl iodide clusters with a 532 nm nanosecond laser was measured using a time-of-flight mass spectrometer. The intensities of multiply charged ions Iq+ (q = 2-4) with a circularly polarised laser pulse were clearly higher than those with a linearly polarised laser pulse, whereas the intensity of singly charged I+ ions showed the opposite trend. The dependences of the ions on the optical polarisation state were also investigated, and flower-petal and square distributions were observed for singly charged ions (I+, C+) and multiply charged ions (I2+, I3+, I4+, C2+), respectively. A theoretical calculation was also proposed to simulate the distributions of ions, and the theoretical results fitted well with the experimental ones. It indicated that the high multiphoton ionisation probability in the initial stage would result in the disintegration of big clusters into small ones and suppress the production of multiply charged ions.
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
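A simplified numerical version of the posterior computation, assuming a single non-overlapping source aperture, a separate background-only aperture, and flat priors, can be sketched as follows; the full treatment with overlapping apertures and informative priors is in the paper.

```python
import numpy as np
from scipy.integrate import trapezoid

def source_intensity_posterior(n_src, n_bkg, a_src, a_bkg, s_grid, b_grid):
    """Posterior p(s | data) for the source rate s, given Poisson counts in a
    source aperture (area a_src, expected counts a_src*(s + b)) and in a
    background-only aperture (area a_bkg, expected counts a_bkg*b), with flat
    priors on s and b. No aperture overlap or PSF fractions are modelled."""
    S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
    mu_src = a_src * (S + B)
    mu_bkg = a_bkg * B
    loglike = (n_src * np.log(mu_src) - mu_src      # log Poisson likelihoods,
               + n_bkg * np.log(mu_bkg) - mu_bkg)   # constant terms dropped
    like = np.exp(loglike - loglike.max())
    post_s = trapezoid(like, b_grid, axis=1)        # marginalize the background rate
    return post_s / trapezoid(post_s, s_grid)       # normalize

s = np.linspace(1e-3, 5.0, 400)
b = np.linspace(1e-3, 3.0, 300)
post = source_intensity_posterior(n_src=37, n_bkg=120, a_src=10.0, a_bkg=100.0,
                                  s_grid=s, b_grid=b)
print("posterior mean source rate:", trapezoid(s * post, s))
```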
Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models
NASA Astrophysics Data System (ADS)
Rigler, E. J.; Wiltberger, M. J.; Love, J. J.
2017-12-01
Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.
Landslide Probability Assessment by the Derived Distributions Technique
NASA Astrophysics Data System (ADS)
Muñoz, E.; Ochoa, A.; Martínez, H.
2012-12-01
Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where an accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slip", which are characterized by a shallow failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a wetting front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdf for mean intensity and duration of the storms. The Philip infiltration model is used along with the soil characteristic curve (suction vs. moisture) and the Mohr-Coulomb failure criteria in order to calculate the FOS of the slope. Data from two slopes located in steep tropical regions of the cities of Medellín (Colombia) and Rio de Janeiro (Brazil) were used to verify the model's performance. The results indicated significant differences between the obtained FOS values and the behavior observed in the field. The model shows relatively high values of FOS that do not reflect the instability of the analyzed slopes. For the two cases studied, the application of a simpler reliability concept (such as the Probability of Failure - PR and Reliability Index - β), instead of a FOS, could lead to more realistic results.
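The derived-distributions idea can be sketched with a Monte Carlo stand-in: exponential storm intensity and duration are pushed through a deliberately crude wetting-front and infinite-slope model to yield a FOS sample. The infiltration rule and all soil and slope parameters below are assumptions and replace the Philip infiltration model and soil characteristic curve used in the paper.

```python
import numpy as np

def fos_derived_distribution(n_storms=100_000, seed=0):
    """Monte Carlo sketch of the derived-distributions technique: exponential
    storm mean intensity and duration (RPPP marginals) are mapped through a
    simplified wetting-front rule and an infinite-slope model to a Factor of
    Safety (FOS) sample, from which an empirical FOS pdf can be built."""
    rng = np.random.default_rng(seed)
    i = rng.exponential(10.0, n_storms) / 1000.0       # mean storm intensity [m/h]
    d = rng.exponential(6.0, n_storms)                 # storm duration [h]

    # Assumed slope and soil parameters
    beta, phi, c = np.radians(35.0), np.radians(30.0), 5.0e3   # slope, friction, cohesion [Pa]
    gamma, gamma_w = 18.0e3, 9.81e3                    # soil and water unit weight [N/m^3]
    z, d_theta = 1.2, 0.15                             # failure depth [m], moisture deficit [-]

    h_w = np.minimum(i * d / d_theta, z)               # crude perched water height [m]
    u = gamma_w * h_w * np.cos(beta) ** 2              # pore pressure at depth z [Pa]
    fos = (c + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)) / \
          (gamma * z * np.sin(beta) * np.cos(beta))
    return fos

fos = fos_derived_distribution()
print("P(FOS < 1) per storm:", np.mean(fos < 1.0))
```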
Sargeant, Glen A.; Sovada, Marsha A.; Slivinski, Christiane C.; Johnson, Douglas H.
2005-01-01
Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997–1999, we searched 355 townships (ca. 93 km^2) 1–3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between species and habitats. Requirements for the use of MCMC image restoration include study areas that can be partitioned into regular grids of mapping units, spatially contagious species distributions, reliable methods for identifying target species, and cumulative probabilities of detection ≥0.65.
NASA Astrophysics Data System (ADS)
Conway, Kim W.; Barrie, J. Vaughn; Krautter, Manfred
2005-09-01
Multibeam imagery of siliceous sponge reefs (Hexactinellida, Hexactinosida) reveals the setting, form, and organization of five reef complexes on the western Canadian continental shelf. The reefs are built by framework skeleton sponges which trap clay-rich sediments resulting in a distinctive pattern of low intensity backscatter from the reefs that colonize more reflective glacial sediments of higher backscatter intensity. Bathymetry and backscatter maps show the distribution and form of reefs in two large complexes in the Queen Charlotte Basin (QCB) covering hundreds of km2, and three smaller reef complexes in the Georgia Basin (GB). Ridges up to 7 km long and 21 m in height, together with diversely shaped, coalescing bioherms and biostromes form the principal reef shape in the QCB whereas chains of wave-form, streamlined mounds up to 14 m in height have developed in the GB. Reef initiation is dependent on the distribution of high backscatter-intensity relict glacial surfaces, and the variation in reef complex morphology is probably the result of tidally driven, near seabed currents.
Cyber-Physical Trade-Offs in Distributed Detection Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.
2010-01-01
We consider a network of sensors that measure the scalar intensity due to the background or a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background or due to sensor errors or both. The detection problem is to infer the presence of a source of unknown intensity and location based on sensor measurements. In the conventional approach, detection decisions are made at the individual sensors, which are then combined at the fusion center, for example using the majority rule. At increased communication and computation costs, we show that a more complex fusion algorithm based on the measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on probability ratios and a minimum packing number for the state-space. We show that these conditions for trade-offs between the cyber costs and physical detection performance are applicable for two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
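The cyber-physical trade-off can be illustrated with a small simulation contrasting hard-decision (majority-rule) fusion against fusing per-sensor log-likelihood ratios, under an assumed Poisson count model and an illustrative 1/(1 + r^2)-type source intensity; none of the constants below come from the paper, and the two rules are shown at their own operating points.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
# 3 x 3 sensor grid in the unit square; the source (when present) sits at the centre
xy = np.stack(np.meshgrid(np.linspace(0.1, 0.9, 3),
                          np.linspace(0.1, 0.9, 3)), -1).reshape(-1, 2)
r2 = ((xy - 0.5) ** 2).sum(axis=1)
BKG, AMP = 5.0, 30.0
SRC_RATE = BKG + AMP / (1.0 + 10.0 * r2)        # illustrative intensity fall-off

def simulate(n_trials, source_on):
    rate = SRC_RATE if source_on else np.full_like(SRC_RATE, BKG)
    return rng.poisson(rate, size=(n_trials, len(rate)))

def fuse(counts):
    # Hard-decision (majority-rule) fusion: ~5% local false-alarm threshold per sensor
    majority = (counts > poisson.ppf(0.95, BKG)).sum(axis=1) > counts.shape[1] // 2
    # Measurement fusion: sum of per-sensor log-likelihood ratios
    llr = (poisson.logpmf(counts, SRC_RATE) - poisson.logpmf(counts, BKG)).sum(axis=1)
    return majority, llr

maj0, llr0 = fuse(simulate(20_000, source_on=False))
maj1, llr1 = fuse(simulate(20_000, source_on=True))
thr = np.quantile(llr0, 0.99)                   # ~1% false alarm for the LLR rule
print(f"majority rule: P_FA={maj0.mean():.3f}  P_D={maj1.mean():.3f}")
print(f"LLR fusion   : P_FA={(llr0 > thr).mean():.3f}  P_D={(llr1 > thr).mean():.3f}")
```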
NASA Astrophysics Data System (ADS)
van der Laan, Gerrit; Thole, B. T.
1995-12-01
A simple theory is presented for core-hole polarization probed by resonant photoemission in a two-step approximation. After excitation from a core level to the valence shell, the core hole decays into two shallower core holes under emission of an electron. The nonspherical core hole and the final state selected cause a specific angle and spin distribution of the emitted electron. The experiment is characterized by the ground-state moments, the polarization of the light, and the spin and angular distribution of the emitted electron. The intensity is a sum over ground-state expectation values of tensor operators times the probability to create a polarized core hole using polarized light, times the probability for decay of such a core hole into the final state. We give general expressions for the angle- and spin-dependent intensities in various regimes of Coulomb and spin-orbit interaction: LS, LSJ, and jjJ coupling. The core-polarization analysis, which generalizes the use of sum rules in x-ray absorption spectroscopy where the integrated peak intensities give ground-state expectation values of the spin and orbital moment operators, makes it possible to measure different linear combinations of these operators. As an application the 2p3/2 3p3p decay in ferromagnetic nickel is calculated using Hartree-Fock values for the radial matrix elements and phase factors, and compared with experiment; the dichroism is smaller in the 3P final state but stronger in the 1D, 1S peak.
NASA Astrophysics Data System (ADS)
Hirabayashi, A.; Okuda, S.; Nambu, Y.; Fujimoto, T.
1987-01-01
We have developed a new method for determination of atomic transition probabilities based on laser-induced-fluorescence spectroscopy (LIFS). In the method one produces a known population of atoms in the upper level under investigation and relates it to an observed absolute line intensity. We have applied this method to the argon 430.0-nm line (1s4-3p8): In an argon discharge plasma the 1s5-level population and spatial distribution are determined by the self-absorption method combined with LIFS under conditions where the 3p8-level population is much lower than that of the 1s5 level. When intense laser light of 419.1 nm (1s5-3p8) irradiates the plasma and saturates the 3p8-level population, the produced 3p8-level population and its alignment can be determined from the 1s5-level parameters as determined above, by solving the master equation on the basis of broad-line excitation. By comparing the observed absolute fluorescence intensity of the 430.0-nm line with the above population, we have determined the transition probability to be A = (3.94 ± 0.60) × 10^5 s^-1. We also determined the 3p8-level lifetime by LIFS. Several factors which might affect the measurement are discussed. The result is τ = 127 ± 10 ns.
Assessing hail risk for a building portfolio by generating stochastic events
NASA Astrophysics Data System (ADS)
Nicolet, Pierrick; Choffet, Marc; Demierre, Jonathan; Imhof, Markus; Jaboyedoff, Michel; Nguyen, Liliane; Voumard, Jérémie
2015-04-01
Among the natural hazards affecting buildings, hail is one of the most costly and is nowadays a major concern for building insurance companies. In Switzerland, several costly events were reported in recent years, among which the July 2011 event, which cost around 125 million EUR to the Aargauer public insurance company (North-western Switzerland). This study presents the new developments in a stochastic model which aims at evaluating the risk for a building portfolio. Thanks to insurance and meteorological radar data of the 2011 Aargauer event, vulnerability curves are proposed by comparing the damage rate to the radar intensity (i.e. the maximum hailstone size reached during the event, deduced from the radar signal). From these data, vulnerability is defined by a two-step process. The first step defines the probability for a building to be affected (i.e. to claim damages), while the second, if the building is affected, attributes a damage rate to the building from a probability distribution specific to the intensity class. To assess the risk, stochastic events are then generated by summing a set of Gaussian functions with 6 random parameters (X and Y location, maximum hailstone size, standard deviation, eccentricity and orientation). The location of these functions is constrained by a general event shape and by the position of the previously defined functions of the same event. For each generated event, the total cost is calculated in order to obtain a distribution of event costs. The general event parameters (shape, size, …) as well as the distribution of the Gaussian parameters are inferred from two radar intensity maps, namely the one of the aforementioned event, and a second from an event which occurred in 2009. After a large number of simulations, the hailstone size distribution obtained in different regions is compared to the distribution inferred from pre-existing hazard maps, built from a larger set of radar data. The simulation parameters are then adjusted by trial and error, in order to get the best reproduction of the expected distributions. The value of the mean annual risk obtained using the model is also compared to the mean annual risk calculated directly from the hazard maps. According to the first results, the return period of an event inducing a total damage cost equal to or greater than 125 million EUR for the Aargauer insurance company would be around 10 to 40 years.
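One way to generate such a stochastic footprint, assuming illustrative parameter ranges rather than the radar-calibrated distributions described above, is to sum elliptical Gaussian functions with random location, amplitude, width, eccentricity and orientation:

```python
import numpy as np

def hail_footprint(n_gauss=25, xmax=100.0, ymax=60.0, seed=None):
    """One stochastic hail event: a field of maximum hailstone size [cm] built by
    summing elliptical Gaussian functions, each with six random parameters
    (x and y location, amplitude, standard deviation, eccentricity, orientation).
    Parameter ranges are illustrative, not calibrated on radar intensity maps."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.linspace(0.0, xmax, 200), np.linspace(0.0, ymax, 120))
    field = np.zeros_like(x)
    for _ in range(n_gauss):
        x0, y0 = rng.uniform(0.0, xmax), rng.uniform(0.0, ymax)   # location [km]
        amp = rng.uniform(1.0, 5.0)                               # amplitude [cm]
        sd = rng.uniform(2.0, 8.0)                                # standard deviation [km]
        ecc = rng.uniform(1.0, 4.0)                               # axis ratio
        theta = rng.uniform(0.0, np.pi)                           # orientation [rad]
        dx, dy = x - x0, y - y0
        u = dx * np.cos(theta) + dy * np.sin(theta)
        v = -dx * np.sin(theta) + dy * np.cos(theta)
        field += amp * np.exp(-0.5 * ((u / (sd * ecc)) ** 2 + (v / sd) ** 2))
    return field

event = hail_footprint(seed=42)
print(f"maximum simulated hailstone size: {event.max():.1f} cm")
```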
Lévy noise improves the electrical activity in a neuron under electromagnetic radiation.
Wu, Juan; Xu, Yong; Ma, Jun
2017-01-01
As the fluctuations of the internal bioelectricity of the nervous system are varied and complex, the external electromagnetic radiation induced by magnetic flux on the membrane can be described by the non-Gaussian type distribution of Lévy noise. Thus, the electrical activities in an improved Hindmarsh-Rose model excited by the external electromagnetic radiation of Lévy noise are investigated and some interesting modes of the electrical activities are exhibited. The external electromagnetic radiation of Lévy noise leads to the mode transition of the electrical activities and spatial phase, such as from the rest state to the firing state, from the spiking state to the spiking state with more spikes, and from the spiking state to the bursting state. Then the time points of the firing state versus the Lévy noise intensity are depicted. Increasing the Lévy noise intensity heightens the neuron firing. Also the stationary probability distribution functions of the membrane potential of the neuron induced by the external electromagnetic radiation of Lévy noise with different intensity, stability index and skewness parameters are analyzed. Moreover, through the positive largest Lyapunov exponent, the parameter regions of the chaotic electrical mode of the neuron induced by the external electromagnetic radiation of Lévy noise distribution are detected.
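A minimal sketch of this kind of experiment, using the classical three-variable Hindmarsh-Rose equations with additive alpha-stable noise in place of the paper's improved model with a magnetic-flux variable, could look like the following; scipy's levy_stable supplies the noise, and all parameter values are standard textbook choices, not the paper's.

```python
import numpy as np
from scipy.stats import levy_stable

def hindmarsh_rose_levy(T=1000.0, dt=0.01, alpha=1.5, beta=0.0, sigma=0.05, seed=1):
    """Euler-Maruyama integration of the classical 3-variable Hindmarsh-Rose
    neuron driven by additive alpha-stable (Levy) noise on the membrane
    potential; a stand-in for the improved model with a magnetic-flux term."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    a, b, c, d, r, s, x_r, I_ext = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.0
    x, y, z = -1.6, -10.0, 2.0
    xs = np.empty(n)
    noise = levy_stable.rvs(alpha, beta, size=n, random_state=rng)
    for k in range(n):
        dx = y - a * x**3 + b * x**2 - z + I_ext
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_r) - z)
        x += dx * dt + sigma * dt ** (1.0 / alpha) * noise[k]   # Levy increment scaling
        y += dy * dt
        z += dz * dt
        xs[k] = x
    return xs

v = hindmarsh_rose_levy()
print("membrane potential range:", v.min(), v.max())
```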
Failure probability analysis of optical grid
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In optical grids, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in optical grids. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when an application described by a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability and improve network resource utilization, achieving a compromise between the network operator and the application submitter. In this way, differentiated services can be achieved in the optical grid.
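The task-based bookkeeping can be illustrated with a toy calculation, assuming independent task failures and that the application fails if any task fails; a backup simply multiplies primary and backup failure probabilities. The MDSA scheduling itself is not modelled, and the task names and probabilities below are hypothetical.

```python
from typing import Dict, Optional

def app_failure_probability(tasks: Dict[str, float],
                            backups: Optional[Dict[str, float]] = None) -> float:
    """Failure probability of a DAG-structured application, assuming independent
    task failures and that the application fails if any task fails. 'tasks' maps
    task id -> failure probability of its primary resource; 'backups' optionally
    maps task id -> failure probability of a backup resource (the task then fails
    only if both primary and backup fail)."""
    backups = backups or {}
    p_success = 1.0
    for task, p_fail in tasks.items():
        p_task_fail = p_fail * backups.get(task, 1.0)   # 1.0 means no backup
        p_success *= (1.0 - p_task_fail)
    return 1.0 - p_success

tasks = {"preprocess": 0.02, "transfer": 0.05, "compute": 0.03, "collect": 0.01}
print("no backup              :", round(app_failure_probability(tasks), 4))
print("backup transfer/compute:",
      round(app_failure_probability(tasks, {"transfer": 0.05, "compute": 0.03}), 4))
```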
Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu
2013-06-01
Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are not only related to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls to wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity using fuel-load and burn-probability based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map and a burn probability map based on historical fire records and potential wildfire spread pattern. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under two approaches of fuel treatment prioritization. We calibrated and validated simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.
He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou
2018-06-20
Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability associated with contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed at the timely and effective detection of (un)intentional intrusion events. However, these available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While being simple in implementation, this assumption may not conform to the fact that the nodal contamination probability may be significantly regionally varied owing to variations in population density and user properties. Furthermore, the low computational efficiency is another important factor that has seriously hampered the practical applications of the currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method to explicitly account for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for the WDS, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk
2015-06-01
The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
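The "calculated data" step can be sketched as follows, with a linear placeholder standing in for the clinically fitted NTCP-versus-V%ratio curve, numpy's standard normal generator playing the role of MATLAB's randn, and scipy's gaussian_kde providing the Gaussian-kernel density estimate:

```python
import numpy as np
from scipy.stats import gaussian_kde

def simulated_ntcp_pdf(v_ratio, n_samples=200, sd=1.0, seed=0):
    """Generate 'calculated data' rectal NTCP samples whose mean lies on a fitted
    NTCP-vs-V%ratio curve (linear placeholder below) with 1% standard deviation,
    then estimate the NTCP probability density with a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    mean_ntcp = 5.0 + 0.6 * v_ratio                              # placeholder fitted curve [%]
    samples = mean_ntcp + sd * rng.standard_normal(n_samples)    # analogue of MATLAB randn
    return samples, gaussian_kde(samples)                        # Gaussian-kernel density estimate

samples, kde = simulated_ntcp_pdf(v_ratio=20.0)
grid = np.linspace(samples.min() - 3.0, samples.max() + 3.0, 200)
print(f"NTCP PDF peak near {grid[np.argmax(kde(grid))]:.1f}%")
```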
Twenty-five years of change in southern African passerine diversity: nonclimatic factors of change.
Péron, Guillaume; Altwegg, Res
2015-09-01
We analysed more than 25 years of change in passerine bird distribution in South Africa, Swaziland and Lesotho, to show that species distributions can be influenced by processes that are at least in part independent of the local strength and direction of climate change: land use and ecological succession. We used occupancy models that separate species' detection from species' occupancy probability, fitted to citizen science data from both phases of the Southern African Bird Atlas Project (1987-1996 and 2007-2013). Temporal trends in species' occupancy probability were interpreted in terms of local extinction/colonization, and temporal trends in detection probability were interpreted in terms of change in abundance. We found for the first time at this scale that, as predicted in the context of bush encroachment, closed-savannah specialists increased where open-savannah specialists decreased. In addition, the trend in the abundance of species a priori thought to be favoured by agricultural conversion was negatively correlated with human population density, which is in line with hypotheses explaining the decline in farmland birds in the Northern Hemisphere. In addition to climate, vegetation cover and the intensity and time since agricultural conversion constitute important predictors of biodiversity changes in the region. Their inclusion will improve the reliability of predictive models of species distribution. © 2015 John Wiley & Sons Ltd.
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
Probability mapping of scarred myocardium using texture and intensity features in CMR images
2013-01-01
Background: The myocardium exhibits heterogeneous nature due to scarring after Myocardial Infarction (MI). In Cardiac Magnetic Resonance (CMR) imaging, Late Gadolinium (LG) contrast agent enhances the intensity of scarred area in the myocardium. Methods: In this paper, we propose a probability mapping technique using Texture and Intensity features to describe heterogeneous nature of the scarred myocardium in Cardiac Magnetic Resonance (CMR) images after Myocardial Infarction (MI). Scarred tissue and non-scarred tissue are represented with high and low probabilities, respectively. Intermediate values possibly indicate areas where the scarred and healthy tissues are interwoven. The probability map of scarred myocardium is calculated by using a probability function based on Bayes rule. Any set of features can be used in the probability function. Results: In the present study, we demonstrate the use of two different types of features. One is based on the mean intensity of pixel and the other on underlying texture information of the scarred and non-scarred myocardium. Examples of probability maps computed using the mean intensity of pixel and the underlying texture information are presented. We hypothesize that the probability mapping of myocardium offers alternate visualization, possibly showing the details with physiological significance difficult to detect visually in the original CMR image. Conclusion: The probability mapping obtained from the two features provides a way to define different cardiac segments which offer a way to identify areas in the myocardium of diagnostic importance (like core and border areas in scarred myocardium). PMID:24053280
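A one-feature version of the Bayes-rule mapping, assuming Gaussian class-conditional intensity densities whose parameters would come from reference scar and remote-myocardium regions, can be written as follows; the synthetic image and all class parameters are placeholders.

```python
import numpy as np
from scipy.stats import norm

def scar_probability_map(image, scar_mu, scar_sd, normal_mu, normal_sd, prior_scar=0.5):
    """Pixel-wise probability of scar from a single intensity feature using
    Bayes' rule with Gaussian class-conditional densities; a texture feature
    would enter the same way as an additional likelihood factor."""
    like_scar = norm.pdf(image, scar_mu, scar_sd) * prior_scar
    like_norm = norm.pdf(image, normal_mu, normal_sd) * (1.0 - prior_scar)
    return like_scar / (like_scar + like_norm)

rng = np.random.default_rng(7)
img = rng.normal(300.0, 60.0, size=(64, 64))      # synthetic LG-CMR intensities
img[20:35, 20:40] += 250.0                        # bright "scarred" patch
pmap = scar_probability_map(img, scar_mu=550.0, scar_sd=80.0, normal_mu=300.0, normal_sd=60.0)
print(f"mean scar probability inside patch: {pmap[20:35, 20:40].mean():.2f}")
```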
Ezquerro, Fernando; Moffa, Adriano H; Bikson, Marom; Khadka, Niranjan; Aparicio, Luana V M; de Sampaio-Junior, Bernardo; Fregni, Felipe; Bensenor, Isabela M; Lotufo, Paulo A; Pereira, Alexandre Costa; Brunoni, Andre R
2017-04-01
To evaluate whether and to which extent skin redness (erythema) affects investigator blinding in transcranial direct current stimulation (tDCS) trials. Twenty-six volunteers received sham and active tDCS, which was applied with saline-soaked sponges of different thicknesses. High-resolution skin images, taken before and 5, 15, and 30 min after stimulation, were randomized and presented to experienced raters who evaluated erythema intensity and judged on the likelihood of stimulation condition (sham vs. active). In addition, semi-automated image processing generated probability heatmaps and surface area coverage of erythema. Adverse events were also collected. Erythema was present, but less intense in sham compared to active groups. Erythema intensity was inversely and directly associated to correct sham and active stimulation group allocation, respectively. Our image analyses found that erythema also occurs after sham and its distribution is homogenous below electrodes. Tingling frequency was higher using thin compared to thick sponges, whereas erythema was more intense under thick sponges. Optimal investigator blinding is achieved when erythema after tDCS is mild. Erythema distribution under the electrode is patchy, occurs after sham tDCS and varies according to sponge thickness. We discuss methods to address skin erythema-related tDCS unblinding. © 2016 International Neuromodulation Society.
A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers
Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin
2018-01-01
In multi-target tracking, outlier-corrupted process and measurement noises can severely reduce the performance of the probability hypothesis density (PHD) filter. To solve the problem, this paper proposes a novel PHD filter, called the Student's t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as a Student's t distribution and approximates the multi-target intensity as a mixture of Student's t components to be propagated in time. Then, a closed PHD recursion is obtained based on the Student's t approximation. Our approach can make full use of the heavy-tailed characteristic of a Student's t distribution to handle situations with heavy-tailed process and measurement noises. The simulation results verify that the proposed filter can overcome the negative effect generated by outliers and maintain a good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348
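For reference, the intensity propagated by such a filter is a weighted sum of Student's t densities; the snippet below simply evaluates a two-component mixture at a few points using scipy's multivariate_t, with illustrative weights, means and shape matrices (not the paper's scenario).

```python
import numpy as np
from scipy.stats import multivariate_t

def stm_intensity(points, weights, means, shapes, dof):
    """Evaluate a Student's t mixture intensity v(x) = sum_i w_i * St(x; m_i, P_i, nu)
    at the given points; the weights are expected target counts per component."""
    points = np.atleast_2d(points)
    v = np.zeros(len(points))
    for w, m, P in zip(weights, means, shapes):
        v += w * multivariate_t.pdf(points, loc=m, shape=P, df=dof)
    return v

weights = [1.2, 0.8]
means = [np.array([0.0, 0.0]), np.array([10.0, 5.0])]
shapes = [np.eye(2) * 4.0, np.eye(2) * 2.0]
x = np.array([[0.5, -0.3], [9.0, 5.5], [30.0, 30.0]])
print(stm_intensity(x, weights, means, shapes, dof=3.0))
```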
Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-07-01
The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
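The intensity form of the K-distribution used in such fits can be evaluated directly; the sketch below follows a common parameterization with mean intensity and shape parameter alpha and checks the normalization numerically (parameter values are arbitrary).

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.integrate import trapezoid

def k_distribution_pdf(I, mean_I, alpha):
    """K-distribution PDF for scattered intensity I with mean mean_I and shape
    parameter alpha; small alpha corresponds to few effective scatterers in the
    probed volume, large alpha approaches negative-exponential speckle statistics."""
    I = np.asarray(I, dtype=float)
    x = alpha * I / mean_I
    return (2.0 * alpha / (gamma(alpha) * mean_I)) * x ** ((alpha - 1.0) / 2.0) \
           * kv(alpha - 1.0, 2.0 * np.sqrt(x))

I = np.linspace(1e-6, 30.0, 20000)
pdf = k_distribution_pdf(I, mean_I=1.0, alpha=2.5)
print("integral (~1):", trapezoid(pdf, I), " mean (~1):", trapezoid(I * pdf, I))
```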
Buechel, Eva C.; Zhang, Jiao; Morewedge, Carey K.; Vosgerau, Joachim
2014-01-01
We propose that affective forecasters overestimate the extent to which experienced hedonic responses to an outcome are influenced by the probability of its occurrence. The experience of an outcome (e.g., winning a gamble) is typically more affectively intense than the simulation of that outcome (e.g., imagining winning a gamble) upon which the affective forecast for it is based. We suggest that, as a result, experiencers allocate a larger share of their attention toward the outcome (e.g., winning the gamble) and less to its probability specifications than do affective forecasters. Consequently, hedonic responses to an outcome are less sensitive to its probability specifications than are affective forecasts for that outcome. The results of 6 experiments provide support for our theory. Affective forecasters overestimated how sensitive experiencers would be to the probability of positive and negative outcomes (Experiments 1 and 2). Consistent with our attentional account, differences in sensitivity to probability specifications disappeared when the attention of forecasters was diverted from probability specifications (Experiment 3) or when the attention of experiencers was drawn toward probability specifications (Experiment 4). Finally, differences in sensitivity to probability specifications between forecasters and experiencers were diminished when the forecasted outcome was more affectively intense (Experiments 5 and 6). PMID:24128184
Estimating earthquake magnitudes from reported intensities in the central and eastern United States
Boyd, Oliver; Cramer, Chris H.
2014-01-01
A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris D,
2010-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by conditions favoring large and/or intense convective systems. Convective systems were defined from TRMM data as a cluster of pixels with an 85 GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 km^2. The study database consisted of convective systems in West Africa from May-Sep for 1998-2007 and in the western Pacific from May-Nov 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences among the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Sub-setting the database revealed some sensitivity in distribution shape to the size of the sampling area, length of sample period, and climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is wetter or drier than normal.
Bivariate at-site frequency analysis of simulated flood peak-volume data using copulas
NASA Astrophysics Data System (ADS)
Gaál, Ladislav; Viglione, Alberto; Szolgay, Ján; Blöschl, Günter; Bacigál, Tomáš
2010-05-01
In frequency analysis of joint hydro-climatological extremes (flood peaks and volumes, low flows and durations, etc.), usually, bivariate distribution functions are fitted to the observed data in order to estimate the probability of their occurrence. Bivariate models, however, have a number of limitations; therefore, in the recent past, dependence models based on copulas have gained increased attention to represent the joint probabilities of hydrological characteristics. Regardless of whether standard or copula based bivariate frequency analysis is carried out, one is generally interested in the extremes corresponding to low probabilities of the fitted joint cumulative distribution functions (CDFs). However, usually there is not enough flood data in the right tail of the empirical CDFs to derive reliable statistical inferences on the behaviour of the extremes. Therefore, different techniques are used to extend the amount of information for the statistical inference, i.e., temporal extension methods that allow for making use of historical data or spatial extension methods such as regional approaches. In this study, a different approach was adopted which uses simulated flood data by rainfall-runoff modelling, to increase the amount of data in the right tail of the CDFs. In order to generate artificial runoff data (i.e. to simulate flood records of lengths of approximately 106 years), a two-step procedure was used. (i) First, the stochastic rainfall generator proposed by Sivapalan et al. (2005) was modified for our purpose. This model is based on the assumption of discrete rainfall events whose arrival times, durations, mean rainfall intensity and the within-storm intensity patterns are all random, and can be described by specified distributions. The mean storm rainfall intensity is disaggregated further to hourly intensity patterns. (ii) Secondly, the simulated rainfall data entered a semi-distributed conceptual rainfall-runoff model that consisted of a snow routine, a soil moisture routine and a flow routing routine (Parajka et al., 2007). The applicability of the proposed method was demonstrated on selected sites in Slovakia and Austria. The pairs of simulated flood volumes and flood peaks were analysed in terms of their dependence structure and different families of copulas (Archimedean, extreme value, Gumbel-Hougaard, etc.) were fitted to the observed and simulated data. The question to what extent measured data can be used to find the right copula was discussed. The study is supported by the Austrian Academy of Sciences and the Austrian-Slovak Co-operation in Science and Education "Aktion". Parajka, J., Merz, R., Blöschl, G., 2007: Uncertainty and multiple objective calibration in regional water balance modeling - Case study in 320 Austrian catchments. Hydrological Processes, 21, 435-446. Sivapalan, M., Blöschl, G., Merz, R., Gutknecht, D., 2005: Linking flood frequency to long-term water balance: incorporating effects of seasonality. Water Resources Research, 41, W06012, doi:10.1029/2004WR003439.
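As a small illustration of the copula-fitting step, the Gumbel-Hougaard family can be fitted to peak-volume pairs by inverting Kendall's tau (theta = 1/(1 - tau)); the data below are synthetic placeholders with positive dependence, and goodness-of-fit comparison among the other families mentioned above is not shown.

```python
import numpy as np
from scipy.stats import kendalltau, rankdata

def fit_gumbel_copula(peaks, volumes):
    """Fit a Gumbel-Hougaard copula to flood peak-volume pairs by inverting
    Kendall's tau (valid for positive dependence, tau > 0), using rank-based
    pseudo-observations."""
    tau, _ = kendalltau(peaks, volumes)
    theta = 1.0 / (1.0 - tau)
    u = rankdata(peaks) / (len(peaks) + 1.0)       # pseudo-observations in (0, 1)
    v = rankdata(volumes) / (len(volumes) + 1.0)
    return theta, u, v

def gumbel_copula_cdf(u, v, theta):
    """Joint CDF C(u, v) of the Gumbel-Hougaard copula."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Synthetic positively dependent peak-volume data (placeholder for simulated floods)
rng = np.random.default_rng(5)
z = rng.normal(size=(2000, 2)) @ np.linalg.cholesky([[1.0, 0.7], [0.7, 1.0]]).T
peaks, volumes = np.exp(z[:, 0]), np.exp(0.5 * z[:, 1] + 1.0)
theta, u, v = fit_gumbel_copula(peaks, volumes)
print(f"estimated theta: {theta:.2f}")
print("joint non-exceedance C(0.99, 0.99) =", gumbel_copula_cdf(0.99, 0.99, theta))
```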
Apomorphine and piribedil in rats: biochemical and pharmacologic studies.
Butterworth, R F; Poignant, J C; Barbeau, A
1975-01-01
We studied the biochemical and pharmacologic modes of action of piribedil and apomorphine in the rat. Although both drugs have many points in common, they are also different in many of their manifestations. Apomorphine causes high-intensity, short-duration stereotyped behavior; it is distributed within the brain in uneven fashion, the striatum being the area of lowest concentration as measured by fluorometry. Direct stereotactic injection within the dopaminergic mesolimbic system, and particularly the tuberculum olfactorium, produced constant intense responses. All effects of apomorphine can be blocked by pimozide, but propranolol, a beta blocker, only reduces aggression and ferocity, leaving stereotyped behaviors intact. Finally, L-5-HTP tends to reduce aggression, ferocity, and to a lesser extent stereotypy; MIF or piribedil, as well as reserpine, potentiates the stereotyped behaviors induced by apomorphine, whereas L-DOPA usually decreases them. Piribedil, on the other hand, causes low-intensity, long-duration stereotyped behavior. It is distributed within the brain almost uniformly. Most effects of piribedil can be blocked by pimozide, but propranolol blocks only aggression and ferocity, leaving stereotyped behaviors intact. On the other hand, clonidine, an alpha-receptor agonist, blocks stereotyped behaviors induced by piribedil but markedly increases aggression, ferocity, and motor activity. L-5-HTP and L-DOPA have little effect on piribedil-induced manifestations. Reserpine decreases piribedil stereotypy. The main metabolite of piribedil, S 584, had no clear-cut pharmacologic action in our hands at the dosage used. It is concluded that both apomorphine and piribedil produce stereotyped behavior by modifying the physiologic balance between mesolimbic and nigrostriatal dopaminergic systems. The other actions of apomorphine and piribedil upon aggression, ferocity, and motor activity are not always in parallel and depend probably on the fact that piribedil is less specific, affecting also noradrenergic, serotonergic, histaminergic, and/or cholinergic circuits. The usefulness of piribedil against some forms of human tremor and its low-intensity antiakinetic action probably result from this pattern of pharmacologic activity.
Towner, Rheal A; Wisniewski, Amy B; Wu, Dee H; Van Gordon, Samuel B; Smith, Nataliya; North, Justin C; McElhaney, Rayburt; Aston, Christopher E; Shobeiri, S Abbas; Kropp, Bradley P; Greenwood-Van Meerveld, Beverley; Hurst, Robert E
2016-03-01
Interstitial cystitis/bladder pain syndrome is a bladder pain disorder associated with voiding symptomatology and other systemic chronic pain disorders. Currently diagnosing interstitial cystitis/bladder pain syndrome is complicated as patients present with a wide range of symptoms, physical examination findings and clinical test responses. One hypothesis is that interstitial cystitis symptoms arise from increased bladder permeability to urine solutes. This study establishes the feasibility of using contrast enhanced magnetic resonance imaging to quantify bladder permeability in patients with interstitial cystitis. Permeability alterations in bladder urothelium were assessed by intravesical administration of the magnetic resonance imaging contrast agent Gd-DTPA (Gd-diethylenetriaminepentaacetic acid) in a small cohort of patients. Magnetic resonance imaging signal intensity in patient and control bladders was compared regionally and for entire bladders. Quantitative assessment of magnetic resonance imaging signal intensity indicated a significant increase in signal intensity in anterior bladder regions compared to posterior regions in patients with interstitial cystitis (p <0.01) and significant increases in signal intensity in anterior bladder regions (p <0.001). Kurtosis (shape of probability distribution) and skewness (measure of probability distribution asymmetry) were associated with contrast enhancement in total bladders in patients with interstitial cystitis vs controls (p <0.05). Regarding symptomatology interstitial cystitis cases differed significantly from controls on the SF-36®, PUF (Pelvic Pain and Urgency/Frequency) and ICPI (Interstitial Cystitis Problem Index) questionnaires with no overlap in the score range in each group. ICSI (Interstitial Cystitis Symptom Index) differed significantly but with a slight overlap in the range of scores. Data suggest that contrast enhanced magnetic resonance imaging provides an objective, quantifiable measurement of bladder permeability that could be used to stratify bladder pain patients and monitor therapy. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Harmouche, Rola; Subbanna, Nagesh K; Collins, D Louis; Arnold, Douglas L; Arbel, Tal
2015-05-01
In this paper, a fully automatic probabilistic method for multiple sclerosis (MS) lesion classification is presented, whereby the posterior probability density function over healthy tissues and two types of lesions (T1-hypointense and T2-hyperintense) is generated at every voxel. During training, the system explicitly models the spatial variability of the intensity distributions throughout the brain by first segmenting it into distinct anatomical regions and then building regional likelihood distributions for each tissue class based on multimodal magnetic resonance image (MRI) intensities. Local class smoothness is ensured by incorporating neighboring voxel information in the prior probability through Markov random fields. The system is tested on two datasets from real multisite clinical trials consisting of multimodal MRIs from a total of 100 patients with MS. Lesion classification results based on the framework are compared with and without the regional information, as well as with other state-of-the-art methods against the labels from expert manual raters. The metrics for comparison include Dice overlap, sensitivity, and positive predictive rates for both voxel and lesion classifications. Statistically significant improvements in Dice values, in voxel-based and lesion-based sensitivity values, and in positive predictive rates are shown when the proposed method is compared to the method without regional information, and to a widely used method [1]. This holds particularly true in the posterior fossa, an area where classification is very challenging. The proposed method allows us to provide clinicians with accurate tissue labels for T1-hypointense and T2-hyperintense lesions, two types of lesions that differ in appearance and clinical ramifications, and with a confidence level in the classification, which helps clinicians assess the classification results.
NASA Astrophysics Data System (ADS)
Gulyaeva, Tamara; Stanislawska, Iwona; Arikan, Feza; Arikan, Orhan
The probability of occurrence of the positive and negative planetary ionosphere storms is evaluated using the W index maps produced from Global Ionospheric Maps of Total Electron Content, GIM-TEC, provided by Jet Propulsion Laboratory, and transformed from geographic coordinates to magnetic coordinates frame. The auroral electrojet AE index and the equatorial disturbance storm time Dst index are investigated as precursors of the global ionosphere storm. The superposed epoch analysis is performed for 77 intense storms (Dst≤-100 nT) and 227 moderate storms (-100
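The superposed epoch analysis named above can be sketched in a few lines; the hourly index series and storm onset times below are synthetic assumptions, not the W-index or Dst data of the study:

import numpy as np

rng = np.random.default_rng(0)
# Hourly synthetic "index" time series and a list of storm onset epochs (hour indices).
index = rng.normal(0.0, 1.0, size=20000)
epochs = rng.integers(500, 19500, size=77)
index[epochs] -= 5.0          # imprint a disturbance at each epoch, for illustration only
window = 48                   # hours before/after each epoch

# Stack a +/- 48 h window around every epoch and average across events.
stack = np.array([index[e - window:e + window + 1] for e in epochs])
mean_curve = stack.mean(axis=0)
lags = np.arange(-window, window + 1)
print("lag of minimum:", lags[np.argmin(mean_curve)], "h")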
Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang
2014-08-25
The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over a turbulent atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of the EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.
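A hedged Monte Carlo sketch of the average-BER calculation: the exponentiated Weibull parameters are placeholders, and the conditional BER form Q(sqrt(2*SNR)*I) is one common convention for subcarrier BPSK rather than the paper's Gauss-Laguerre formulation:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def ew_samples(alpha, beta, eta, size, rng):
    # Exponentiated Weibull irradiance by inverse-transform sampling.
    # CDF: F(I) = (1 - exp(-(I/eta)**beta))**alpha.
    u = rng.uniform(size=size)
    return eta * (-np.log(1.0 - u ** (1.0 / alpha))) ** (1.0 / beta)

# Illustrative EW parameters (placeholders, not fitted to any experiment).
alpha, beta, eta = 5.0, 1.5, 0.8
I = ew_samples(alpha, beta, eta, 10**6, rng)
I /= I.mean()                              # normalise the mean irradiance to 1

for gamma_db in range(0, 21, 5):
    gamma = 10 ** (gamma_db / 10)
    # Assumed conditional BER for subcarrier BPSK: Q(sqrt(2*gamma)*I).
    ber = norm.sf(np.sqrt(2 * gamma) * I).mean()
    print(f"SNR = {gamma_db:2d} dB  ->  average BER ~ {ber:.3e}")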
Photocounting distributions for exponentially decaying sources.
Teich, M C; Card, H C
1979-05-01
Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
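A numerical sketch of the stochastic model described above, assuming an illustrative range for the uniformly distributed start time and placeholder rate parameters:

import numpy as np

rng = np.random.default_rng(7)
I0, tau = 50.0, 1.0       # initial rate (counts per unit time) and decay constant
T = 0.5                   # counting interval
T0 = 5.0                  # assumed range over which the start time is uniform

# Integrated intensity over [t0, t0+T] for I(t) = I0*exp(-t/tau).
t0 = rng.uniform(0.0, T0, size=200000)
lam = I0 * tau * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))

# Conditional on the start time, the registered count is Poisson with mean lam.
counts = rng.poisson(lam)
print("mean count:", counts.mean(), " variance:", counts.var())
print("P(n = 0):", np.mean(counts == 0))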
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
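A minimal sketch of the image-formation idea, summing an isotropic Gaussian probability density per detected photon and localizing the emitter by the mean photon position (the photon positions and uncertainty below are synthetic assumptions, not data from the described microscope):

import numpy as np

rng = np.random.default_rng(2)
# Synthetic photon positions of origin (pixels) around a single emitter at (32, 32).
photons = rng.normal(loc=32.0, scale=1.5, size=(200, 2))
sigma = 1.5                                # assumed localisation uncertainty per photon
size = 64
yy, xx = np.mgrid[0:size, 0:size]

# Sum one Gaussian intensity-related probability density per detected photon.
image = np.zeros((size, size))
for y, x in photons:
    image += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
image /= 2 * np.pi * sigma ** 2

# Multivariate-normal style localisation: the mean of the photon positions.
print("estimated emitter position:", photons.mean(axis=0))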
Intensity-enhanced MART for tomographic PIV
NASA Astrophysics Data System (ADS)
Wang, HongPing; Gao, Qi; Wei, RunJie; Wang, JinJun
2016-05-01
A novel technique to shrink the elongated particles and suppress the ghost particles in particle reconstruction of tomographic particle image velocimetry is presented. This method, named intensity-enhanced multiplicative algebraic reconstruction technique (IntE-MART), utilizes an inverse diffusion function and an intensity suppressing factor to improve the quality of particle reconstruction and consequently the precision of velocimetry. A numerical assessment of vortex ring motion with and without image noise is performed to evaluate the new algorithm in terms of reconstruction, particle elongation and velocimetry. The simulation is performed at seven different seeding densities. The comparison of spatial filter MART and IntE-MART on the probability density function of particle peak intensity suggests that one of the local minima of the distribution can be used to separate the ghosts and actual particles. Thus, ghost removal based on IntE-MART is also introduced. To verify the application of IntE-MART, a real flat-plate turbulent boundary layer experiment is performed. The result indicates that ghost reduction can increase the accuracy of the RMS of the velocity field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ramirez Aviles, Camila A.
We consider the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network deployed around the facility's ventilation off-gas stack. The intensity of stack emissions decays with distance, and the sensor counts or measurements are inherently random with parameters determined by the intensity at the sensor's location. We utilize the measurements to estimate the intensity at the stack, and use it in a one-sided Sequential Probability Ratio Test (SPRT) to infer on/off status of the reactor. We demonstrate the superior performance of this method over conventional majority fusers and individual sensors using (i) test measurements from a network of 21 NaI detectors, and (ii) effluence measurements collected at the stack of a reactor facility. We also analytically establish the superior detection performance of the network over individual sensors with fixed and adaptive thresholds by utilizing the Poisson distribution of the counts. We quantify the performance improvements of the network detection over individual sensors using the packing number of the intensity space.
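A minimal sketch of a one-sided SPRT on Poisson counts of the kind described above (the background and source rates and the error levels are placeholders, not values from the 21-detector network):

import numpy as np

def poisson_sprt(counts, lam0, lam1, alpha=0.01, beta=0.01):
    # Sequential probability ratio test between Poisson rates lam0 (off) and lam1 (on).
    upper = np.log((1 - beta) / alpha)     # accept H1 (reactor on)
    lower = np.log(beta / (1 - alpha))     # accept H0 (reactor off)
    llr = 0.0
    for k in counts:
        llr += k * np.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "on"
        if llr <= lower:
            return "off"
    return "undecided"

rng = np.random.default_rng(11)
background, source = 4.0, 6.0              # illustrative counts per measurement interval
print(poisson_sprt(rng.poisson(source, 200), background, source))
print(poisson_sprt(rng.poisson(background, 200), background, source))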
Decision algorithm for data center vortex beam receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2017-12-01
We present a new scheme for a vortex beam communications system which exploits the radial component p of Laguerre-Gauss modes in addition to the azimuthal component l generally used. We derive a new encoding algorithm which makes use of the spatial distribution of intensity to create an alphabet dictionary for communication. We suggest an application of the scheme as part of an optical wireless link for intra data center communication. We investigate the probability of error in decoding, for several detector options.
The neuroeconomic path of the law.
Hoffman, Morris B
2004-01-01
Advances in evolutionary biology, experimental economics and neuroscience are shedding new light on age-old questions about right and wrong, justice, freedom, the rule of law and the relationship between the individual and the state. Evidence is beginning to accumulate suggesting that humans evolved certain fundamental behavioural predispositions grounded in our intense social natures, that those predispositions are encoded in our brains as a distribution of probable behaviours, and therefore that there may be a core of universal human law. PMID:15590608
Generation of attosecond electron beams in relativistic ionization by short laser pulses
NASA Astrophysics Data System (ADS)
Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.
2018-03-01
Ionization by relativistically intense short laser pulses is studied in the framework of strong-field quantum electrodynamics. Distinctive patterns are found in the energy probability distributions of photoelectrons, which are sensitive to the properties of a driving laser field. It is demonstrated that these electrons are generated in the form of solitary attosecond wave packets. This is particularly important in light of various applications of attosecond electron beams such as in ultrafast electron diffraction and crystallography, or in time-resolved electron microscopy of physical, chemical, and biological processes. We also show that, for intense laser pulses, high-energy ionization takes place in narrow regions surrounding the momentum spiral, the exact form of which is determined by the shape of a driving pulse. The self-intersections of the spiral define the momenta for which the interference patterns in the energy distributions of photoelectrons are observed. Furthermore, these interference regions lead to the synthesis of single-electron wave packets characterized by coherent double-hump structures.
Laser beam propagation in atmospheric turbulence
NASA Technical Reports Server (NTRS)
Murty, S. S. R.
1979-01-01
The optical effects of atmospheric turbulence on the propagation of low power laser beams are reviewed in this paper. The optical effects are produced by the temperature fluctuations which result in fluctuations of the refractive index of air. The commonly-used models of index-of-refraction fluctuations are presented. Laser beams experience fluctuations of beam size, beam position, and intensity distribution within the beam due to refractive turbulence. Some of the observed effects are qualitatively explained by treating the turbulent atmosphere as a collection of moving gaseous lenses of various sizes. Analytical results and experimental verifications of the variance, covariance and probability distribution of intensity fluctuations in weak turbulence are presented. For stronger turbulence, a saturation of the optical scintillations is observed. The saturation of scintillations involves a progressive break-up of the beam into multiple patches; the beam loses some of its lateral coherence. Heterodyne systems operating in a turbulent atmosphere experience a loss of heterodyne signal due to the destruction of coherence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yuan; Ma, Po-Lun; Jiang, Jonathan H.
The attribution of the widely observed shifted precipitation extremes to different forcing agents represents a critical issue for understanding of changes in the hydrological cycle. To compare aerosol and greenhouse-gas effects on the historical trends of precipitation intensity, we performed AMIP-style NCAR/DOE CAM5 model simulations from 1950-2005 with and without anthropogenic aerosol forcings. Precipitation rates at every time step in CAM5 are used to construct precipitation probability distribution functions. By contrasting the two sets of experiments, we found that the global warming induced by the accumulating greenhouse gases is responsible for the changes in precipitation intensity at the global scale. However, regionally over Eastern China, the drastic increase in anthropogenic aerosols primarily accounts for the observed light precipitation suppression since the 1950s. Compared with aerosol radiative effects, the aerosol microphysical effect has a predominant role in determining the historical trends of precipitation intensity in Eastern China.
NASA Astrophysics Data System (ADS)
Wang, Zhiqiang; Jiang, Jingyi; Ma, Qing
2016-12-01
Climate change is affecting every aspect of human activities, especially agriculture. In China, extreme drought events caused by climate change have posed a great threat to food safety. In this work we aimed to study the drought risk of maize in the farming-pastoral ecotone in Northern China based on a physical vulnerability assessment. The physical vulnerability curve was constructed from the relationship between the drought hazard intensity index and the yield loss rate. The risk assessment of agricultural drought was conducted from the drought hazard intensity index and the physical vulnerability curve. The probability distribution of the drought hazard intensity index decreased from south-west to north-east and increased from south-east to north-west along the rainfall isoline. The physical vulnerability curve had a reduction effect in three parts of the farming-pastoral ecotone in Northern China, which helped to reduce the drought vulnerability of spring maize. The risk of yield loss ratio calculated from the physical vulnerability curve was lower than that based on the drought hazard intensity index alone, which suggests that the capacity of spring maize to resist and adapt to drought is increasing. In conclusion, the farming-pastoral ecotone in Northern China is highly sensitive to climate change and has a high probability of severe drought hazard. Risk assessment based on physical vulnerability can help better understand the physical vulnerability to agricultural drought and can also promote measures to adapt to climate change.
Zeng, Chuan; Giantsoudi, Drosoula; Grassberger, Clemens; Goldberg, Saveli; Niemierko, Andrzej; Paganetti, Harald; Efstathiou, Jason A.; Trofimov, Alexei
2013-01-01
Purpose: Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. Methods: For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. Results: IFD courses included equal number of fractions boosting either hemisphere, thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. Conclusions: In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. 
IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane, thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled. PMID:23635256
Zeng, Chuan; Giantsoudi, Drosoula; Grassberger, Clemens; Goldberg, Saveli; Niemierko, Andrzej; Paganetti, Harald; Efstathiou, Jason A; Trofimov, Alexei
2013-05-01
Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. IFD courses included equal number of fractions boosting either hemisphere, thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions. 
IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane, thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled.
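The two abstracts above do not spell out the EUD-based model in full; the sketch below uses a common generalized-EUD and logistic-TCP form with placeholder parameters, only to illustrate how an EUD and TCP would be computed from a voxel dose distribution:

import numpy as np

def gEUD(dose, a):
    # Generalized equivalent uniform dose over a voxel dose array.
    return np.mean(dose ** a) ** (1.0 / a)

def tcp_logistic(eud, tcd50, gamma50):
    # Logistic TCP as a function of EUD (parameter values below are placeholders).
    return 1.0 / (1.0 + (tcd50 / eud) ** (4.0 * gamma50))

rng = np.random.default_rng(5)
# Toy voxel doses (Gy): a uniform course and a mildly inhomogeneous one with equal mean.
uniform = np.full(1000, 60.0)
inhomogeneous = 60.0 + rng.normal(0.0, 4.0, size=1000)

for name, d in [("uniform", uniform), ("inhomogeneous", inhomogeneous)]:
    eud = gEUD(d, a=-10.0)                 # a negative exponent emphasises cold spots in tumours
    print(f"{name}: EUD = {eud:.1f} Gy, TCP = {tcp_logistic(eud, 57.0, 2.0):.2f}")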
Source Region and Growth Analysis of Narrowband Z-mode Emission at Saturn
NASA Astrophysics Data System (ADS)
Menietti, J. D.; Pisa, D.; Santolik, O.; Ye, S.; Arridge, C. S.; Coates, A. J.
2015-12-01
Z-mode intensity levels can be significant in the lower density region near the inner edge of the Enceladus torus at Saturn, where these waves may resonate with electrons at MeV energies. The source mechanism of this emission, which is narrow banded and most intense near 5 kHz, is not yet well understood. We survey the Cassini Radio and Plasma Wave Science (RPWS) data to isolate several probable source regions. Electron phase space distributions are obtained from the Cassini Electron Spectrometer (ELS), a part of the Cassini Plasma Spectrometer (CAPS) investigation. These data are analyzed in seeking the wave source mechanism, free energy source and growth rate of Z-mode observations. We present the first results of our analysis.
Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model
NASA Astrophysics Data System (ADS)
Berglund, Nils; Landon, Damien
2012-08-01
We study the stochastic FitzHugh-Nagumo equations, modelling the dynamics of neuronal action potentials in parameter regimes characterized by mixed-mode oscillations. The interspike time interval is related to the random number of small-amplitude oscillations separating consecutive spikes. We prove that this number has an asymptotically geometric distribution, whose parameter is related to the principal eigenvalue of a substochastic Markov chain. We provide rigorous bounds on this eigenvalue in the small-noise regime and derive an approximation of its dependence on the system's parameters for a large range of noise intensities. This yields a precise description of the probability distribution of observed mixed-mode patterns and interspike intervals.
A missing dimension in measures of vaccination impacts
Gomes, M. Gabriela M.; Lipsitch, Marc; Wargo, Andrew R.; Kurath, Gael; Rebelo, Carlota; Medley, Graham F.; Coutinho, Antonio
2013-01-01
Immunological protection, acquired from either natural infection or vaccination, varies among hosts, reflecting underlying biological variation and affecting population-level protection. Owing to the nature of resistance mechanisms, distributions of susceptibility and protection entangle with pathogen dose in a way that can be decoupled by adequately representing the dose dimension. Any infectious processes must depend in some fashion on dose, and empirical evidence exists for an effect of exposure dose on the probability of transmission to mumps-vaccinated hosts [1], the case-fatality ratio of measles [2], and the probability of infection and, given infection, of symptoms in cholera [3]. Extreme distributions of vaccine protection have been termed leaky (partially protects all hosts) and all-or-nothing (totally protects a proportion of hosts) [4]. These distributions can be distinguished in vaccine field trials from the time dependence of infections [5]. Frailty mixing models have also been proposed to estimate the distribution of protection from time to event data [6], [7], although the results are not comparable across regions unless there is explicit control for baseline transmission [8]. Distributions of host susceptibility and acquired protection can be estimated from dose-response data generated under controlled experimental conditions [9]–[11] and natural settings [12], [13]. These distributions can guide research on mechanisms of protection, as well as enable model validity across the entire range of transmission intensities. We argue for a shift to a dose-dimension paradigm in infectious disease science and community health.
Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit
NASA Astrophysics Data System (ADS)
Vittaldev, Vivek; Russell, Ryan P.
2017-09-01
Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
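A much-simplified CPU sketch of the Monte Carlo idea: a single Gaussian relative-position uncertainty sampled at the time of closest approach, rather than the full propagation and GPU parallelization described above; all numbers are placeholders:

import numpy as np

rng = np.random.default_rng(42)
n = 10**6
# Relative position uncertainty of the two objects at closest approach (km),
# collapsed into one 3-D Gaussian with an assumed mean miss vector and covariance.
mean = np.array([0.4, 0.1, 0.0])
cov = np.diag([0.09, 0.04, 0.01])
combined_radius = 0.02                     # sum of the two collision radii (km)

samples = rng.multivariate_normal(mean, cov, size=n)
miss = np.linalg.norm(samples, axis=1)
p_collision = np.mean(miss < combined_radius)
stderr = np.sqrt(p_collision * (1 - p_collision) / n)   # binomial standard error
print(f"P(collision) ~ {p_collision:.2e} +/- {stderr:.1e}")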
NASA Astrophysics Data System (ADS)
Carreau, J.; Naveau, P.; Neppel, L.
2017-05-01
The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
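A minimal sketch of the probability-weighted-moments step for a single generalized Pareto fit to threshold excesses, following the Hosking and Wallis (1987) estimators; the mixture-of-GPs and cross-validation machinery of the study is not reproduced, and the precipitation series is synthetic:

import numpy as np
from scipy import stats

def gpd_pwm(excesses):
    # PWM estimates of the GP shape (xi) and scale (sigma) for threshold excesses.
    x = np.sort(excesses)
    n = len(x)
    a0 = x.mean()
    a1 = np.sum(x * (n - np.arange(1, n + 1)) / (n - 1)) / n
    sigma = 2.0 * a0 * a1 / (a0 - 2.0 * a1)
    xi = 2.0 - a0 / (a0 - 2.0 * a1)
    return xi, sigma

rng = np.random.default_rng(8)
# Synthetic heavy-tailed "daily precipitation" (mm) and a high threshold (placeholders).
rain = stats.genpareto.rvs(c=0.15, scale=10.0, size=5000, random_state=rng)
threshold = np.quantile(rain, 0.95)
xi, sigma = gpd_pwm(rain[rain > threshold] - threshold)
print(f"estimated shape xi = {xi:.2f}, scale = {sigma:.1f}")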
The minimum area requirements (MAR) for giant panda: an empirical study
Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang
2016-01-01
Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population's long-term persistence. In this study, the response of occupancy probability of giant pandas against habitat patch size was studied in five of the six mountain ranges inhabited by giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive. PMID:27929520
The minimum area requirements (MAR) for giant panda: an empirical study.
Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang
2016-12-08
Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population's long-term persistence. In this study, the response of occupancy probability of giant pandas against habitat patch size was studied in five of the six mountain ranges inhabited by giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive.
Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data
NASA Astrophysics Data System (ADS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.
2017-02-01
We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. and Switzer et al., covering about 41 deg² at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales which FASTICA, as a conservative subtraction technique of non-Gaussian signals, cannot mitigate. However, we determine similar GBT-WiggleZ cross-correlation measurements to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
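A toy sketch of foreground removal with FastICA on a synthetic frequency-pixel cube (smooth power-law foregrounds plus a faint Gaussian signal), using scikit-learn rather than the GBT pipeline; all amplitudes and the band are illustrative assumptions:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_freq, n_pix = 64, 4096
freqs = np.linspace(700.0, 900.0, n_freq)             # MHz, illustrative band

# Smooth, highly non-Gaussian foregrounds: two power-law spectra times spatial templates.
fg = np.outer((freqs / 800.0) ** -2.7, rng.lognormal(0.0, 1.0, n_pix)) \
   + np.outer((freqs / 800.0) ** -2.1, rng.lognormal(0.0, 0.5, n_pix))
signal = 1e-3 * rng.normal(size=(n_freq, n_pix))       # faint Gaussian 21 cm-like signal
cube = fg + signal

# Remove a small number of independent components along the frequency axis.
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
sources = ica.fit_transform(cube.T)                    # pixels as samples, frequencies as features
residual = cube - ica.inverse_transform(sources).T
print("foreground rms:", fg.std(), " residual rms:", residual.std())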
Knock probability estimation through an in-cylinder temperature model with exogenous noise
NASA Astrophysics Data System (ADS)
Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.
2018-01-01
This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature and an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties at the in-cylinder temperature estimation. The model only has one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared in steady conditions by using a fuel balance and a lambda sensor and differences below 1 % were found.
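A hedged sketch of the knock-probability idea: an Arrhenius-like ignition-delay integral evaluated on a noisy unburned-gas temperature trace, with Monte Carlo propagation of the temperature uncertainty; the functional form and all parameter values are placeholders, not the paper's calibrated model:

import numpy as np

rng = np.random.default_rng(6)

def knock_probability(T_trace, dt, A=4e-5, B=3800.0, sigma_T=15.0, n_mc=2000, rng=rng):
    # Monte Carlo knock probability: autoignition is declared when the knock
    # integral of dt/tau(T) reaches 1 before the end of the trace.
    knocks = 0
    for _ in range(n_mc):
        T = T_trace + rng.normal(0.0, sigma_T)         # one noise realisation (K)
        tau = A * np.exp(B / T)                        # ignition delay (s), Arrhenius-like
        knocks += np.sum(dt / tau) >= 1.0
    return knocks / n_mc

# Toy temperature trace of the unburned gas over a 4 ms compression/combustion window (K).
t = np.linspace(0.0, 0.004, 400)
dt = t[1] - t[0]
T_trace = 650.0 + 250.0 * np.sin(np.pi * t / t[-1])    # rises to ~900 K and falls back
print("knock probability ~", knock_probability(T_trace, dt))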
Arterial tree tracking from anatomical landmarks in magnetic resonance angiography scans
NASA Astrophysics Data System (ADS)
O'Neil, Alison; Beveridge, Erin; Houston, Graeme; McCormick, Lynne; Poole, Ian
2014-03-01
This paper reports on arterial tree tracking in fourteen Contrast Enhanced MRA volumetric scans, given the positions of a predefined set of vascular landmarks, by using the A* algorithm to find the optimal path for each vessel based on voxel intensity and a learnt vascular probability atlas. The algorithm is intended for use in conjunction with an automatic landmark detection step, to enable fully automatic arterial tree tracking. The scan is filtered to give two further images using the top-hat transform with 4mm and 8mm cubic structuring elements. Vessels are then tracked independently on the scan in which the vessel of interest is best enhanced, as determined from knowledge of typical vessel diameter and surrounding structures. A vascular probability atlas modelling expected vessel location and orientation is constructed by non-rigidly registering the training scans to the test scan using a 3D thin plate spline to match landmark correspondences, and employing kernel density estimation with the ground truth center line points to form a probability density distribution. Threshold estimation by histogram analysis is used to segment background from vessel intensities. The A* algorithm is run using a linear cost function constructed from the threshold and the vascular atlas prior. Tracking results are presented for all major arteries excluding those in the upper limbs. An improvement was observed when tracking was informed by contextual information, with particular benefit for peripheral vessels.
Frost risks in the Mantaro river basin
NASA Astrophysics Data System (ADS)
Trasmonte, G.; Chavez, R.; Segura, B.; Rosales, J. L.
2008-04-01
As part of the study on the Mantaro river basin's (central Andes of Perú) current vulnerability to climate change, the temporal and spatial characteristics of frosts were analysed. These characteristics included intensity, frequency, duration, frost-free periods, area distribution and historical trends. Maps of frost risk were determined for the entire river basin, by means of mathematical algorithms and GIS (Geographic Information Systems) tools, using minimum temperature - 1960 to 2002 period, geomorphology, slope, land-use, types of soils, vegetation and life zones, emphasizing the rainy season (September to April), when the impacts of frost on agriculture are most severe. We recognized four categories of frost risks: low, moderate, high and critical. The critical risks (with a very high probability of occurrence) were related to high altitudes on the basin (altitudes higher than 3800 m a.s.l.), while the low (or null) probability of occurring risks were found in the lower zones (less than 2500 m a.s.l.). Because of the very intense agricultural activity and the high sensitivity of the main crops (Maize, potato, artichoke) in the Mantaro valley (altitudes between 3100 and 3300 m a.s.l.), moderate to high frost risks can be expected, with a low to moderate probability of occurrence. Another significant result was a positive trend of 8 days per decade in the number of frost days during the rainy season.
NASA Astrophysics Data System (ADS)
Núñez Gutiérrez, M.
2013-05-01
In recent years, there has been a change in the flood hazard of the "Salting" basin environment, specifically in the watersheds of the "El Zarco" and "Tamarind" streams, located in the northern part of the municipality of Puerto Vallarta, Jalisco; precipitation has lately shifted from cyclonic to convective in character, bringing intense but short-lived storms, and, coupled with the growth of the metropolitan area of Puerto Vallarta, this has clogged the drainage outlets to the sea, so that water is stored on site until it disappears through evapotranspiration. A hydrometeorological analysis is performed based on the triangulation method, using hydrometric records from the weather station "The Desembocada" of Puerto Vallarta, which is the only one authorized by the CNA; the main source of official values for weather stations in the Mexican Republic, however, is the ERIC III database (Rapid Climatological Information Extractor, version III), in which precipitation data and average, minimum and maximum monthly temperatures are available for each station. This is combined with probabilistic methods based on exploration of the probability distribution function (PDF): for records shorter than 30 years, small-sample methods (Pearson's chi-square, Student's t and Fisher's F tests) are used, together with discrete or continuous probability functions to estimate rainfall intensity; digital terrain models with sufficient mapping of elevations, precipitation and temperature are also used within a GIS.
Climate Change Impact on Variability of Rainfall Intensity in Upper Blue Nile Basin, Ethiopia
NASA Astrophysics Data System (ADS)
Worku, L. Y.
2015-12-01
Extreme rainfall events are major problems in Ethiopia, with the resulting floods usually causing significant damage to agriculture, ecology and infrastructure, disruption of human activities, loss of property, loss of lives and disease outbreaks. The aim of this study was to explore the likely changes in precipitation extremes due to future climate change. The study specifically focuses on understanding the impact of future climate change on the variability of rainfall intensity-duration-frequency relationships in the Upper Blue Nile basin. Precipitation data from two Global Climate Models (GCMs), HadCM3 and CGCM3, were used in the study. Rainfall frequency analysis was carried out to estimate quantiles with different return periods. Probability Weighted Moments (PWM) was selected for distribution parameter estimation, and L-Moment Ratio Diagrams (LMRDs) were used to find the best parent distribution for each station. The parent distributions derived from the frequency analysis are the Generalized Logistic (GLOG), Generalized Extreme Value (GEV), and Gamma and Pearson III (P3) distributions. After analyzing the estimated quantiles, a simple disaggregation model was applied in order to obtain sub-daily rainfall data. Finally, IDF curves were fitted to the disaggregated rainfall, and the results show that in most parts of the basin rainfall intensity is expected to increase in the future. Based on the two GCM outputs, the study indicates a likely increase of precipitation extremes over the Blue Nile basin due to the changing climate. This study should be interpreted with caution as the GCM model outputs in this part of the world have huge uncertainty.
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris
2009-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by favoring large and/or intense convective systems. Convective systems were defined from Tropical Rainfall Measuring Mission (TRMM) data as a cluster of pixels with an 85-GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 square kilometers. The study database consisted of convective systems in West Africa from May to September 1998-2007, and in the western Pacific from May to November 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences between the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Subsetting the database revealed some sensitivity in distribution shape to the size of the sampling area, the length of the sample period, and the climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is either wetter or drier than normal.
Quantitative histogram analysis of images
NASA Astrophysics Data System (ADS)
Holub, Oliver; Ferreira, Sérgio T.
2006-11-01
A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: The analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western-blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: No installation necessary—executable file together with necessary files for LabVIEW Run-time engine. Operating systems or monitors under which the program has been tested: WindowsME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for loading of an image. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: No. No. of lines in distributed program, including test data, etc.: 138 946. No. of bytes in distributed program, including test data, etc.: 15 166 675. Distribution format: tar.gz. Nature of physical problem: Quantification of image data (e.g., for discrimination of molecular species in gels or fluorescent molecular probes in cell cultures) requires proprietary or complex software packages, which might not include the relevant statistical parameters or make the analysis of multiple images a tedious procedure for the general user. Method of solution: Tool for conversion of RGB bitmap image into luminance-linear image and extraction of luminance histogram, probability distribution, and statistical parameters (average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of histogram and median of probability distribution) with possible selection of region of interest (ROI) and lower and upper threshold levels. Restrictions on the complexity of the problem: Does not incorporate application-specific functions (e.g., morphometric analysis). Typical running time: Seconds (depending on image size and processor speed). Unusual features of the program: None.
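The same histogram statistics can be sketched outside LabVIEW; the greyscale conversion coefficients, region of interest, thresholds and synthetic image below are illustrative assumptions:

import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Synthetic 8-bit RGB image standing in for a fluorescence micrograph.
rgb = rng.integers(0, 256, size=(256, 256, 3)).astype(float)

# Intensity-linear greyscale conversion with selectable coefficients (Rec. 601-style here).
coeffs = np.array([0.299, 0.587, 0.114])
grey = rgb @ coeffs

# Optional region of interest and lower/upper thresholds, as in the program description.
roi = grey[64:192, 64:192]
roi = roi[(roi >= 10) & (roi <= 250)]

hist, edges = np.histogram(roi, bins=256, range=(0, 255))
print("mean:", roi.mean(), " std:", roi.std(), " min/max:", roi.min(), roi.max())
print("mode bin:", edges[np.argmax(hist)], " median:", np.median(roi))
print("skewness:", stats.skew(roi), " kurtosis:", stats.kurtosis(roi))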
Puc, Małgorzata; Wolski, Tomasz
2013-01-01
The allergenic pollen content of the atmosphere varies according to climate, biogeography and vegetation. Minimisation of the pollen allergy symptoms is related to the possibility of avoidance of large doses of the allergen. Measurements performed in Szczecin over a period of 13 years (2000-2012 inclusive) permitted prediction of theoretical maximum concentrations of pollen grains and their probability for the pollen season of Poaceae, Artemisia and Ambrosia. Moreover, the probabilities were determined of a given date as the beginning of the pollen season, the date of the maximum pollen count, Seasonal Pollen Index value and the number of days with pollen count above threshold values. Aerobiological monitoring was conducted using a Hirst volumetric trap (Lanzoni VPPS). A linear trend with the coefficient of determination (R²) was calculated. A model for long-term forecasting was constructed by a method based on Gumbel's distribution. A statistically significant negative correlation was determined between the duration of the pollen season of Poaceae and Artemisia and the Seasonal Pollen Index value. Seasonal, total pollen counts of Artemisia and Ambrosia showed a strong and statistically significant decreasing tendency. On the basis of Gumbel's distribution, a model was proposed for Szczecin, allowing prediction of the probabilities of the maximum pollen count values that can appear once in e.g. 5, 10 or 100 years. Short pollen seasons are characterised by a higher intensity of pollination than long ones. Prediction of the maximum pollen count values, dates of the pollen season beginning, and the number of days with pollen count above the threshold, on the basis of Gumbel's distribution, is expected to lead to improvement in the prophylaxis and therapy of persons allergic to pollen.
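A minimal sketch of the Gumbel-based prediction of maximum values expected once in N years (the annual maxima below are synthetic placeholders, not the Szczecin record):

import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# Synthetic annual maxima of daily pollen counts (grains/m^3) for 13 seasons.
annual_max = stats.gumbel_r.rvs(loc=120, scale=40, size=13, random_state=rng)

loc, scale = stats.gumbel_r.fit(annual_max)
for T in (5, 10, 100):
    # Value expected to be exceeded on average once every T years.
    x_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"once-in-{T}-years maximum count ~ {x_T:.0f} grains/m^3")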
New Possibilities of Positron-Emission Tomography
NASA Astrophysics Data System (ADS)
Volobuev, A. N.
2018-01-01
The reasons for the emergence of the angular distribution of photons generated as a result of annihilation of an electron and a positron in a positron-emission tomograph are investigated. It is shown that the angular distribution of the radiation intensity (i.e., the probability of photon emission at different angles) is a consequence of the Doppler effect in the center-of-mass reference system of the electron and the positron. In the reference frame attached to the electron, the angular distribution of the number of emitted photons does not exist but is replaced by the Doppler shift of the frequency of photons. The results obtained in this study make it possible to extend the potentialities of the positron-emission tomograph in the diagnostics of diseases and to obtain additional mechanical characteristics of human tissues, such as density and viscosity.
NASA Technical Reports Server (NTRS)
Press, Harry; Mazelsky, Bernard
1954-01-01
The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads or the proportion of total time that the load has given values. Thus, for an airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
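The core input-output relation, Phi_out(omega) = |H(omega)|^2 Phi_in(omega), with the load variance given by the integral of Phi_out, can be sketched numerically; the gust spectrum and the first-order transfer function below are illustrative assumptions, not the report's airplane model:

import numpy as np
from scipy.stats import norm

# Frequencies (rad/s) and an assumed Dryden-like gust input spectrum (illustrative form).
omega = np.linspace(0.01, 50.0, 5000)
L, V, sigma_w = 300.0, 100.0, 1.0                      # scale (m), airspeed (m/s), gust rms (m/s)
phi_in = sigma_w**2 * (2 * L / (np.pi * V)) / (1 + (L * omega / V) ** 2)

# Toy first-order frequency response standing in for the load-per-gust transfer function.
tau = 0.5
H = 1.0 / (1.0 + 1j * omega * tau)

# Output load spectrum and standard deviation (root mean square) of the load.
phi_out = np.abs(H) ** 2 * phi_in
sigma_load = np.sqrt(np.sum(phi_out) * (omega[1] - omega[0]))
print("rms load response:", sigma_load)

# For a normally distributed load, the proportion of time beyond k standard deviations:
for k in (1, 2, 3):
    print(f"P(|load| > {k} sigma) = {2 * norm.sf(k):.4f}")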
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast data on wave height, wind speed, and current velocity in the Bohai Sea are sampled for a case study. Four kinds of distributions, namely, the Gumbel, lognormal, Weibull, and Pearson Type III distributions, are candidate models for the marginal distributions of wave height, wind speed, and current velocity; the Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models make full use of the marginal information and of the dependence among the three variables. The design return values of the three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
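A minimal sketch of the copula construction, with hypothetical marginals and dependence parameter rather than the paper's fitted Pearson Type III models, is shown below; it evaluates the bivariate Gumbel-Hougaard copula and the joint ("AND") return period of a candidate wave-height/wind-speed combination.

```python
# A minimal sketch (hypothetical parameters, not the paper's fitted values) of a
# bivariate Gumbel-Hougaard copula joining two marginals, and of the "AND" joint
# exceedance probability for a wave-height/wind-speed design combination.
import numpy as np
from scipy import stats

def gumbel_hougaard(u, v, theta):
    """Bivariate Gumbel-Hougaard copula C(u, v), theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Hypothetical marginals (lognormal is one of the paper's candidate families)
wave_marg = stats.lognorm(s=0.4, scale=2.5)    # significant wave height, m
wind_marg = stats.lognorm(s=0.3, scale=12.0)   # wind speed, m/s

h, w = 6.0, 25.0                               # candidate design values
u, v = wave_marg.cdf(h), wind_marg.cdf(w)
theta = 2.0                                    # hypothetical dependence parameter

# Probability that both H > h and W > w occur in a given (annual) event
p_and = 1.0 - u - v + gumbel_hougaard(u, v, theta)
print("joint exceedance probability:", p_and, "joint return period (yr):", 1.0 / p_and)
```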
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional counterparts are widely used in probability theory. Generating functions of these multidimensional distributions were also obtained.
Stochastic sensitivity of a bistable energy model for visual perception
NASA Astrophysics Data System (ADS)
Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev
2017-01-01
Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated for different noise intensities and degrees of system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
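The transition-counting idea can be sketched with a generic double-well system rather than the authors' specific energy model; the snippet below simulates Gaussian-noise-driven dynamics with the Euler-Maruyama method and counts noise-induced switches between the two coexisting states at several noise intensities.

```python
# A minimal sketch (generic double-well potential, not the authors' energy model)
# of counting noise-induced transitions between two coexisting states.
import numpy as np

def count_transitions(D, T=2000.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of dx = (x - x^3) dt + sqrt(2 D) dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = 1.0                      # start in the right-hand well
    side = np.sign(x)
    transitions = 0
    for _ in range(n):
        x += (x - x ** 3) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        if np.sign(x) != side and abs(x) > 0.5:   # crossed into the other well
            side = np.sign(x)
            transitions += 1
    return transitions

for D in (0.05, 0.1, 0.2):
    print(f"noise intensity D={D}: {count_transitions(D)} transitions")
```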
How extreme is extreme hourly precipitation?
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael; Dialynas, Yannis G.; Pappas, Christoforos
2016-04-01
Accurate representation of precipitation at fine time scales (e.g., hourly), directly associated with flash flood events, is crucial in hydrological design and prediction. The upper part of a probability distribution, known as the distribution tail, determines the behaviour of extreme events. In general, and loosely speaking, tails can be categorized into two families: the subexponential and the hyperexponential family, with the former generating more intense and more frequent extremes than the latter. In past studies, the focus has been mainly on daily precipitation, with the Gamma distribution being the most popular model. Here, we investigate the behaviour of the tails of hourly precipitation by comparing the upper part of empirical distributions of thousands of records with three general types of tails corresponding to the Pareto, Lognormal, and Weibull distributions. Specifically, we use thousands of hourly rainfall records from all over the USA. The analysis indicates that heavier-tailed distributions describe the observed hourly rainfall extremes better than lighter tails do. Traditional representations of the marginal distribution of hourly rainfall may significantly deviate from the observed behaviour of extremes, with direct implications for the modelling of hydroclimatic variables and for engineering design.
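A toy version of the tail comparison, using synthetic rather than the study's station data, is sketched below: exceedances above a high threshold are fitted with Pareto-type (generalized Pareto), Lognormal and Weibull models and compared by log-likelihood.

```python
# A minimal sketch (synthetic "hourly rainfall") of comparing Pareto-, Lognormal- and
# Weibull-type tails on the exceedances above a high threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hourly = rng.gamma(shape=0.4, scale=2.0, size=50_000)   # synthetic wet-hour depths, mm
hourly = hourly[hourly > 0.1]

threshold = np.quantile(hourly, 0.95)
excess = hourly[hourly > threshold] - threshold          # exceedances above the threshold

candidates = {
    "Pareto (GPD)": stats.genpareto,
    "Lognormal": stats.lognorm,
    "Weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(excess, floc=0.0)                  # location fixed at the threshold
    loglik = np.sum(dist.logpdf(excess, *params))
    print(f"{name:14s} log-likelihood: {loglik:.1f}")
```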
NASA Astrophysics Data System (ADS)
Takayabu, Yukari; Hamada, Atsushi; Mori, Yuki; Murayama, Yuki; Liu, Chuntao; Zipser, Edward
2015-04-01
While extreme rainfall has a huge impact upon human society, the characteristics of extreme precipitation vary from region to region. Seventeen years of three-dimensional precipitation measurements from the space-borne precipitation radar (PR) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite enable us to describe the characteristics of regional extreme precipitation globally. Extreme rainfall statistics are based on rainfall events defined as sets of contiguous PR rainy pixels. Regional extreme rainfall events are defined as those in which maximum near-surface rainfall rates exceed the corresponding 99.9th percentile in each 2.5° x 2.5° horizontal-resolution grid box. First, regional extreme rainfall is characterized in terms of its intensity and event size. Regions of "intense and extensive" extreme rainfall are found mainly over oceans near coastal areas and are likely associated with tropical cyclones and with convective systems associated with the establishment of monsoons. Regions of "intense but less extensive" extreme rainfall are distributed widely over land and the maritime continent, probably related to afternoon showers and mesoscale convective systems. Regions of "extensive but less intense" extreme rainfall are found almost exclusively over oceans, likely associated with well-organized mesoscale convective systems and extratropical cyclones. Secondly, regional extremes in terms of surface rainfall intensity are compared with those in terms of convection height. Conventionally, extremely tall convection is considered to contribute most to intense rainfall. Comparing probability density functions (PDFs) of the 99th percentiles of near-surface rainfall intensity in each grid box with those of the 40-dBZ echo-top heights, we find that the heaviest precipitation in a region is associated not with the tallest systems but with systems of moderate height. Interestingly, this separation of extremely heavy precipitation from extremely tall convection is quite universal, irrespective of region. Rainfall characteristics and environmental conditions both indicate the importance of warm-rain processes in producing extreme rainfall rates. Thus it is demonstrated that, even in regions where severe convective storms are the representative extreme weather events, the heaviest rainfall events are mostly associated with less intense convection. Third, the effect of rainfall-event size on precipitation intensity is investigated. Comparisons of normalized PDFs of footprint-scale rainfall intensity for different sizes of rainfall events show that footprint-scale extreme rainfall becomes stronger as the rainfall events get larger. At the same time, the stratiform fraction in area as well as in rainfall amount increases with event size, confirming that larger features are more organized systems. Overall, it is statistically shown that the organization of precipitation brings about an increase not only in extreme volumetric rainfall but also in the probability of satellite-footprint-scale extreme rainfall.
Rings in above-threshold ionization: A quasiclassical analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewenstein, M.; Kulander, K.C.; Schafer, K.J.
1995-02-01
A generalized strong-field approximation is formulated to describe atoms interacting with intense laser fields. We apply it to determine angular distributions of electrons in above-threshold ionization (ATI). The theory treats the effects of an electron rescattering from its parent ion core in a systematic perturbation series. Probability amplitudes for ionization are interpreted in terms of quasiclassical electron trajectories. We demonstrate that contributions from the direct tunneling processes in the absence of rescattering are not sufficient to describe the observed ATI spectra. We show that the high-energy portion of the spectrum, including recently discovered rings (i.e., complex features in the angular distributions of outgoing electrons), is due to rescattering processes. We compare our quasiclassical results with exact numerical solutions.
2014-09-18
Under a fatigue stress regime, Paris's Law relates sub-critical crack growth to the stress intensity factor; it is one of the most widely used fatigue crack growth models and was used in this research effort (Paris and Erdogan, 1963). After takeoff, the model generates a probability distribution for the crack length in that specific sortie based on the ...
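A minimal sketch of the Paris's Law growth model referenced above, with hypothetical material constants and geometry factor (not those of the cited research effort), is:

```python
# Paris's Law: da/dN = C * (dK)^m, with dK = Y * dSigma * sqrt(pi * a),
# integrated cycle by cycle for an illustrative crack.
import math

C, m = 1.0e-12, 3.0        # hypothetical Paris constants (MPa*sqrt(m), m/cycle units)
Y = 1.12                   # hypothetical geometry factor
delta_sigma = 200.0        # stress range, MPa
a = 0.001                  # initial crack length, m

for cycle in range(1, 200_001):
    delta_K = Y * delta_sigma * math.sqrt(math.pi * a)   # stress-intensity factor range
    a += C * delta_K ** m                                 # sub-critical growth per cycle
    if cycle % 50_000 == 0:
        print(f"after {cycle} cycles: crack length = {a * 1000:.3f} mm")
```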
Optical rogue-wave-like extreme value fluctuations in fiber Raman amplifiers.
Hammani, Kamal; Finot, Christophe; Dudley, John M; Millot, Guy
2008-10-13
We report experimental observation and characterization of rogue wave-like extreme value statistics arising from pump-signal noise transfer in a fiber Raman amplifier. Specifically, by exploiting Raman amplification with an incoherent pump, the amplified signal is shown to develop a series of temporal intensity spikes whose peak power follows a power-law probability distribution. The results are interpreted using a numerical model of the Raman gain process using coupled nonlinear Schrödinger equations, and the numerical model predicts results in good agreement with experiment.
Adaptive Optics Communications Performance Analysis
NASA Technical Reports Server (NTRS)
Srinivasan, M.; Vilnrotter, V.; Troy, M.; Wilson, K.
2004-01-01
The performance improvement obtained through the use of adaptive optics for deep-space communications in the presence of atmospheric turbulence is analyzed. Using simulated focal-plane signal-intensity distributions, uncoded pulse-position modulation (PPM) bit-error probabilities are calculated assuming the use of an adaptive focal-plane detector array as well as an adaptively sized single detector. It is demonstrated that current practical adaptive optics systems can yield performance gains over an uncompensated system ranging from approximately 1 dB to 6 dB depending upon the PPM order and background radiation level.
NASA Astrophysics Data System (ADS)
Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.
2018-06-01
In this paper, we have analyzed the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay system in which the FSO link is impaired by atmospheric turbulence-induced fading modeled using the α-μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the proposed system model also considers the pointing error effect on the FSO link. A novel and accurate mathematical expression for the probability density function of an FSO link experiencing α-μ distributed atmospheric turbulence in the presence of pointing error is derived. Further, we have presented analytical expressions for the outage probability and bit error rate in terms of the Meijer G-function. In addition, a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme is derived in terms of the bivariate Fox H-function. The atmospheric turbulence, misalignment errors and various binary modulation schemes for intensity modulation on the optical wireless link are considered in producing the results. Finally, we have analyzed each of the three performance metrics at high SNR in order to express them in terms of elementary functions, and the analytical results are supported by computer-based simulations.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Lang, Stephen; Chern, Jiun-Dar; Peters-Lidard, Christa; Fridlind, Ann; Matsui, Toshihisa
2016-01-01
The Goddard microphysics was recently improved by adding a fourth ice class (frozen drops/hail). This new 4ICE scheme was developed and tested in the Goddard Cumulus Ensemble (GCE) model for an intense continental squall line and a moderate, less organized continental case. Simulated peak radar reflectivity profiles were improved in intensity and shape for both cases, as were the overall reflectivity probability distributions versus observations. In this study, the new Goddard 4ICE scheme is implemented into the regional-scale NASA Unified-Weather Research and Forecasting (NU-WRF) model, modified, and evaluated for the same intense squall line, which occurred during the Midlatitude Continental Convective Clouds Experiment (MC3E). NU-WRF simulated radar reflectivities, total rainfall, propagation, and convective system structures using the 4ICE scheme modified herein agree as well as or significantly better with observations than the original 4ICE and two previous 3ICE (graupel or hail) versions of the Goddard microphysics. With the modified 4ICE, the bin-microphysics-based rain evaporation correction improves propagation and, in conjunction with eliminating the unrealistic dry collection of ice/snow by hail, can replicate the erect, narrow, and intense convective cores. Revisions to the ice supersaturation, ice number concentration formula, and snow size mapping, including a new snow breakup effect, allow the modified 4ICE to produce a stronger, better organized system, more snow, and mimic the strong aggregation signature in the radar distributions. NU-WRF original 4ICE simulated radar reflectivity distributions are consistent with and generally superior to those using the GCE due to the less restrictive domain and lateral boundaries.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
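For context, the classical CDF-based check that these tests complement can be sketched with a standard scipy call on a synthetic sample:

```python
# A minimal sketch of a Kolmogorov-Smirnov test of i.i.d. draws against a fully
# specified distribution (standard normal here; the sample is synthetic).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
draws = rng.standard_normal(500)                     # i.i.d. draws to be tested

statistic, p_value = stats.kstest(draws, "norm")     # empirical CDF vs specified CDF
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
# Discrepancies confined to regions where the specified density is small can leave
# the empirical CDF nearly unchanged, which is the deficiency the proposed
# density-based tests address.
```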
Global Distribution of Density Irregularities in the Equatorial Ionosphere
NASA Technical Reports Server (NTRS)
Kil, Hyosub; Heelis, R. A.
1998-01-01
We analyzed measurements of ion number density made by the retarding potential analyzer aboard the Atmosphere Explorer-E (AE-E) satellite, which was in an approximately circular orbit at an altitude near 300 km in 1977 and later at an altitude near 400 km. Large-scale (greater than 60 km) density measurements in the high-altitude regions show large depletions with bubble-like structures which are confined to narrow local time, longitude, and magnetic latitude ranges, while those in the low-altitude regions show relatively small depletions which are broadly distributed in space. For this reason we considered the altitude regions below 300 km and above 350 km and investigated the global distribution of irregularities using the rms deviation delta N/N over a path length of 18 km as an indicator of overall irregularity intensity. Seasonal variations of irregularity occurrence probability are significant in the Pacific regions, while the occurrence probability is always high in the Atlantic-African regions and always low in the Indian regions. We find that the high occurrence probability in the Pacific regions is associated with isolated bubble structures, while that near 0 deg longitude is produced by large depletions with bubble structures superimposed on a large-scale wave-like background. Considerations of longitude variations due to seeding mechanisms and due to F region winds and drifts are necessary to adequately explain the observations at low and high altitudes. Seeding effects are most obvious near 0 deg longitude, while the most easily observed effect of the F region is the suppression of irregularity growth by interhemispheric neutral winds.
Cold Ambient Temperature Promotes Nosema spp. Intensity in Honey Bees (Apis mellifera)
Retschnig, Gina; Williams, Geoffrey R.; Schneeberger, Annette; Neumann, Peter
2017-01-01
Interactions between parasites and environmental factors have been implicated in the loss of managed Western honey bee (=HB, Apis mellifera) colonies. Although laboratory data suggest that cold temperature may limit the spread of Nosema ceranae, an invasive species and now ubiquitous endoparasite of Western HBs, the impact of weather conditions on the distribution of this microsporidian in the field is poorly understood. Here, we conducted a survey for Nosema spp. using 18 Swiss apiaries (four colonies per apiary) over a period of up to 18 months. Samples consisting of 60 workers were collected monthly from each colony to estimate Nosema spp. intensity, i.e., the number of spores in positive samples using microscopy. Ambient apiary temperature was measured daily to estimate the proportion of days enabling HB flight (>10 °C at midday). The results show that Nosema spp. intensities were negatively correlated with the proportion of days enabling HB flight, thereby suggesting a significant and unexpected positive impact of cold ambient temperature on intensities, probably via regulation of defecation opportunities for infected hosts. PMID:28208761
Kwak, Sehyun; Svensson, J; Brix, M; Ghim, Y-C
2016-02-01
A Bayesian model of the emission spectrum of the JET lithium beam has been developed to infer the intensity of the Li I (2p-2s) line radiation and associated uncertainties. The detected spectrum for each channel of the lithium beam emission spectroscopy system is here modelled by a single Li line modified by an instrumental function, Bremsstrahlung background, instrumental offset, and interference filter curve. Both the instrumental function and the interference filter curve are modelled with non-parametric Gaussian processes. All free parameters of the model, the intensities of the Li line, Bremsstrahlung background, and instrumental offset, are inferred using Bayesian probability theory with a Gaussian likelihood for photon statistics and electronic background noise. The prior distributions of the free parameters are chosen as Gaussians. Given these assumptions, the intensity of the Li line and corresponding uncertainties are analytically available using a Bayesian linear inversion technique. The proposed approach makes it possible to extract the intensity of Li line without doing a separate background subtraction through modulation of the Li beam.
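The closed-form Gaussian posterior underlying such a Bayesian linear inversion can be sketched generically (the forward matrix, prior and data below are synthetic placeholders, not the JET analysis):

```python
# A minimal sketch of Gaussian linear inversion: with y = A x + noise, a Gaussian
# prior on x and a Gaussian likelihood, the posterior mean and covariance of the
# intensities x are available in closed form.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_params = 50, 3

A = rng.random((n_pixels, n_params))          # synthetic forward model (instrument/filter response)
x_true = np.array([5.0, 1.0, 0.5])            # "true" line, background, offset intensities
noise_var = 0.1
y = A @ x_true + rng.normal(scale=np.sqrt(noise_var), size=n_pixels)

prior_mean = np.zeros(n_params)
prior_cov = np.eye(n_params) * 10.0           # broad Gaussian prior

# Closed-form Gaussian posterior
post_cov = np.linalg.inv(A.T @ A / noise_var + np.linalg.inv(prior_cov))
post_mean = post_cov @ (A.T @ y / noise_var + np.linalg.inv(prior_cov) @ prior_mean)
post_std = np.sqrt(np.diag(post_cov))
print("posterior mean:", post_mean, "posterior std:", post_std)
```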
DOE R&D Accomplishments Database
Sibener, S. J.; Lee, Y. T.
1978-05-01
An experiment was performed which confirms the existence of an internal mode dependence of molecular sticking probabilities for collisions of molecules with a cold surface. The scattering of a velocity-selected effusive beam of CCl{sub 4} from a 90 K CCl{sub 4} ice surface has been studied at five translational velocities and for two different internal temperatures. At a surface temperature of 90 K (approx. 99% sticking probability) a fourfold increase in reflected intensity was observed for the internally excited (560 K) CCl{sub 4} relative to the room temperature (298 K) CCl{sub 4} at a translational velocity of 2.5 X 10{sup 4} cm/sec. For a surface temperature of 90 K, all angular distributions were found to peak 15° superspecularly, independent of incident velocity.
NASA Astrophysics Data System (ADS)
Mokem Fokou, I. S.; Nono Dueyou Buckjohn, C.; Siewe Siewe, M.; Tchawoua, C.
2018-03-01
In this manuscript, a hybrid energy harvesting system combining piezoelectric and electromagnetic transduction and subjected to colored noise is investigated. Using the stochastic averaging method, the stationary probability density functions of the amplitudes are obtained and reveal interesting dynamics related to the long-term behavior of the device. From the stationary probability densities, we discuss the stochastic bifurcation through the qualitative changes they undergo, which shows that the noise intensity, the correlation time and other system parameters can be treated as bifurcation parameters. Numerical simulations are performed for comparison with the analytical findings. The mean first passage time (MFPT) is computed numerically in order to investigate the stability of the system. By computing the mean residence time (TMR), we explore the stochastic resonance phenomenon and show how it is related to the correlation time of the colored noise and to a high output power.
Erasing the Milky Way: New Cleaning Technique Applied to GBT Intensity Mapping Data
NASA Technical Reports Server (NTRS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masi, K.W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.;
2016-01-01
We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (fastica), and we develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that fastica is a reliable tool for subtracting diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which fastica, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we determine GBT-WiggleZ cross-correlation measurements similar to those obtained by the Singular Value Decomposition (SVD) method, and confirm that foreground subtraction with fastica is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and fastica are complementary methods for investigating the foregrounds and noise systematics present in intensity mapping datasets.
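The component-subtraction idea can be sketched on mock data (this is not the GBT pipeline): a frequency-smooth, non-Gaussian foreground plus a faint Gaussian signal is decomposed with FastICA, the reconstruction from the leading components serves as the foreground model, and the residual approximates the signal plus noise.

```python
# A minimal sketch of ICA-based foreground subtraction on mock intensity maps.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_freq, n_pix = 64, 4096

nu = np.linspace(0.7, 1.0, n_freq)
sync = np.outer(nu ** -2.7, rng.lognormal(0.0, 1.0, n_pix))     # smooth, non-Gaussian foreground
flat = np.outer(nu ** -2.1, rng.lognormal(0.0, 0.8, n_pix))     # second foreground template
signal = 1e-2 * rng.standard_normal((n_freq, n_pix))            # faint, Gaussian-like signal
maps = sync + flat + signal                                     # (frequency, pixel) data

ica = FastICA(n_components=2, random_state=0, max_iter=1000)
sources = ica.fit_transform(maps.T)                             # independent components over pixels
foreground_model = ica.inverse_transform(sources).T             # reconstruction from components

residual = maps - foreground_model                              # ~ signal + noise (some signal loss)
print("foreground rms:", (sync + flat).std(), "residual rms:", residual.std())
```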
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Wu, Di; Lang, Stephen; Chern, Jiundar; Peters-Lidard, Christa; Fridlind, Ann; Matsui, Toshihisa
2015-01-01
The Goddard microphysics scheme was recently improved by adding a fourth ice class (frozen drops/hail). This new 4ICE scheme was implemented and tested in the Goddard Cumulus Ensemble (GCE) model for an intense continental squall line and a moderate, less-organized continental case. Simulated peak radar reflectivity profiles were improved in both intensity and shape for both cases, as were the overall reflectivity probability distributions versus observations. In this study, the new Goddard 4ICE scheme is implemented into the regional-scale NASA Unified-Weather Research and Forecasting (NU-WRF) model and tested on an intense mesoscale convective system that occurred during the Midlatitude Continental Convective Clouds Experiment (MC3E). The NU-WRF simulated radar reflectivities, rainfall intensities, and vertical and horizontal structure using the new 4ICE scheme agree as well as or significantly better with observations than when using previous versions of the Goddard 3ICE (graupel or hail) schemes. In the 4ICE scheme, the bin-microphysics-based rain evaporation correction produces more erect convective cores, while modification of the unrealistic collection of ice by dry hail produces narrow and intense cores, allowing more slow-falling snow to be transported rearward. Together with a revised snow size mapping, the 4ICE scheme produces a more horizontally stratified trailing stratiform region with a broad, more coherent light rain area. In addition, the NU-WRF 4ICE simulated radar reflectivity distributions are consistent with and generally superior to those using the GCE due to the less restrictive open lateral boundaries.
Battista, Jerry J; Johnson, Carol; Turnbull, David; Kempe, Jeff; Bzdusek, Karl; Van Dyk, Jacob; Bauman, Glenn
2013-12-01
To examine a range of scenarios for image-guided adaptive radiation therapy of prostate cancer, including different schedules for megavoltage CT imaging, patient repositioning, and dose replanning. We simulated multifraction dose distributions with deformable registration using 35 sets of megavoltage CT scans of 13 patients. We computed cumulative dose-volume histograms, from which tumor control probabilities and normal tissue complication probabilities (NTCPs) for rectum were calculated. Five-field intensity modulated radiation therapy (IMRT) with 18-MV x-rays was planned to achieve an isocentric dose of 76 Gy to the clinical target volume (CTV). The differences between D95, tumor control probability, V70Gy, and NTCP for rectum, for accumulated versus planned dose distributions, were compared for different target volume sizes, margins, and adaptive strategies. The CTV D95 for IMRT treatment plans, averaged over 13 patients, was 75.2 Gy. Using the largest CTV margins (10/7 mm), the D95 values accumulated over 35 fractions were within 2% of the planned value, regardless of the adaptive strategy used. For tighter margins (5 mm), the average D95 values dropped to approximately 73.0 Gy even with frequent repositioning, and daily replanning was necessary to correct this deficit. When personalized margins were applied to an adaptive CTV derived from the first 6 treatment fractions using the STAPLE (Simultaneous Truth and Performance Level Estimation) algorithm, target coverage could be maintained using a single replan 1 week into therapy. For all approaches, normal tissue parameters (rectum V(70Gy) and NTCP) remained within acceptable limits. The frequency of adaptive interventions depends on the size of the CTV combined with target margins used during IMRT optimization. The application of adaptive target margins (<5 mm) to an adaptive CTV determined 1 week into therapy minimizes the need for subsequent dose replanning. Copyright © 2013 Elsevier Inc. All rights reserved.
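The dose-volume metrics compared in this study can be sketched from per-voxel dose samples (synthetic values below, not the patients' plans): D95 is the dose received by at least 95% of the target volume, and V70Gy is the fraction of the rectal volume receiving at least 70 Gy.

```python
# A minimal sketch of D95 and V70Gy computed from synthetic per-voxel doses.
import numpy as np

rng = np.random.default_rng(0)
ctv_dose = rng.normal(loc=76.0, scale=1.5, size=10_000)      # per-voxel CTV dose, Gy
rectum_dose = rng.normal(loc=45.0, scale=15.0, size=5_000)   # per-voxel rectal dose, Gy

d95 = np.percentile(ctv_dose, 5.0)      # dose that 95% of the CTV voxels receive or exceed
v70 = np.mean(rectum_dose >= 70.0)      # fraction of rectal volume at or above 70 Gy
print(f"CTV D95 = {d95:.1f} Gy, rectum V70Gy = {100 * v70:.1f}%")
```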
Probabilistic clustering of rainfall condition for landslide triggering
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Luciani, Silvia; Cesare Mondini, Alessandro; Kirschbaum, Dalia; Valigi, Daniela; Guzzetti, Fausto
2013-04-01
Landslides are widespread natural and man-made phenomena. They are triggered by earthquakes, rapid snow melting and human activities, but mostly by typhoons and intense or prolonged rainfall; in Italy they are mostly triggered by intense precipitation. The prediction of landslides triggered by rainfall over large areas is commonly based on empirical models. Empirical landslide rainfall thresholds are used to identify rainfall conditions for possible landslide initiation. It is common practice to define rainfall thresholds by assuming a power-law lower boundary in the rainfall intensity-duration or cumulative rainfall-duration space above which landslides can occur. The boundary is defined considering the rainfall conditions associated with landslide phenomena using heuristic approaches, and it does not consider rainfall events that did not cause landslides. Here we present a new, fully automatic method to estimate the probability of landslide occurrence associated with rainfall conditions characterized by measures of intensity or cumulative rainfall and rainfall duration. The method splits past rainfall events into two groups, events causing landslides and its complement, and estimates their probability distributions; the probabilistic membership of a new event in one of the two clusters is then estimated. The method does not assume any threshold model a priori but simply exploits the empirical distribution of rainfall events. The approach was applied in the Umbria region, Central Italy, where a catalogue of landslide timings was obtained through a search of chronicles, blogs and other sources of information for the period 2002-2012. The approach was tested using rain gauge measurements and satellite rainfall estimates (NASA TRMM-v6), in both cases allowing the identification of the rainfall conditions triggering landslides in the region. Compared with existing threshold definition methods, the proposed one (i) largely reduces the subjectivity in the choice of the threshold model and in how it is calculated, and (ii) can be set up more easily in other study areas. The proposed approach can be conveniently integrated into existing early-warning systems to improve the accuracy of the estimate of the landslide occurrence probability associated with rainfall events and of its uncertainty.
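The two-group probabilistic membership idea can be sketched with synthetic events (not the Umbria catalogue): class-conditional densities are fitted to rainfall conditions with and without landslides, and Bayes' rule gives the probability that a new rainfall condition belongs to the landslide-triggering group.

```python
# A minimal sketch: fit a density to each group of past rainfall conditions
# (with/without landslides) and compute the posterior membership of a new event.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# log10(duration in h), log10(cumulated rainfall in mm) for synthetic past events
no_slide = rng.multivariate_normal([0.8, 1.0], [[0.12, 0.04], [0.04, 0.10]], size=400)
slide = rng.multivariate_normal([1.2, 1.7], [[0.10, 0.03], [0.03, 0.08]], size=60)

def fitted_density(samples):
    return multivariate_normal(mean=samples.mean(axis=0), cov=np.cov(samples.T))

f_slide, f_no = fitted_density(slide), fitted_density(no_slide)
prior_slide = len(slide) / (len(slide) + len(no_slide))

new_event = np.log10([24.0, 80.0])                     # a 24 h, 80 mm rainfall condition
num = prior_slide * f_slide.pdf(new_event)
den = num + (1.0 - prior_slide) * f_no.pdf(new_event)
print("probability the event belongs to the landslide-triggering group:", num / den)
```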
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkán-Kacsó, Sándor
2014-06-14
A theoretical method is proposed for the calculation of the photon counting probability distribution during a bin time. Two-state fluorescence and steady excitation are assumed. A key feature is a kinetic scheme that allows for an extensive class of stochastic waiting time distribution functions, including power laws, expanded as a sum of weighted decaying exponentials. The solution is analytic in certain conditions, and an exact and simple expression is found for the integral contribution of “bright” and “dark” states. As an application for power law kinetics, theoretical results are compared with experimental intensity histograms from a number of blinking CdSe/ZnS quantum dots. The histograms are consistent with distributions of intensity states around a “bright” and a “dark” maximum. A gap of states is also revealed in the more-or-less flat inter-peak region. The slope and to some extent the flatness of the inter-peak feature are found to be sensitive to the power-law exponents. Possible models consistent with these findings are discussed, such as the combination of multiple charging and fluctuating non-radiative channels or the multiple recombination center model. A fitting of the latter to experiment provides constraints on the interaction parameter between the recombination centers. Further extensions and applications of the photon counting theory are also discussed.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
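A minimal Monte Carlo sketch of this random pulse model (illustrative parameters, with a single-mode decaying impulse response standing in for the blade structural model) is:

```python
# Poisson pulse arrivals, normally distributed amplitudes, three equally likely
# impact locations; the peak of a single-mode response is collected per realization.
import numpy as np

rng = np.random.default_rng(0)
rate = 5.0                                # mean pulse arrival rate, pulses per second
sigma_amp = 1.0                           # standard deviation of pulse amplitude
duration = 10.0                           # simulated time, s
omega, zeta = 2 * np.pi * 50.0, 0.02      # hypothetical modal frequency and damping
t = np.linspace(0.0, duration, 20_000)

def one_realization():
    n_pulses = rng.poisson(rate * duration)
    arrivals = rng.uniform(0.0, duration, size=n_pulses)   # Poisson-process arrival times
    amps = rng.normal(0.0, sigma_amp, size=n_pulses)       # zero-mean normal amplitudes
    locs = rng.integers(0, 3, size=n_pulses)               # 3 equally likely tip locations
    gains = np.array([1.0, 0.8, 0.6])[locs]                # hypothetical location influence
    resp = np.zeros_like(t)
    for t0, a, g in zip(arrivals, amps, gains):
        dt = t - t0
        mask = dt >= 0.0
        resp[mask] += a * g * np.exp(-zeta * omega * dt[mask]) * np.sin(omega * dt[mask])
    return resp.max()

peaks = np.array([one_realization() for _ in range(200)])
print("mean peak response:", peaks.mean(), "95th percentile:", np.percentile(peaks, 95))
```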
Quantifying the effect of forests on frequency and intensity of rockfalls
NASA Astrophysics Data System (ADS)
Moos, Christine; Dorren, Luuk; Stoffel, Markus
2017-02-01
Forests serve as a natural means of protection against small rockfalls. Due to their barrier effect, they reduce the intensity and the propagation probability of falling rocks and thus reduce the occurrence frequency of a rockfall event for a given element at risk. However, despite established knowledge on the protective effect of forests, they are generally neglected in quantitative rockfall risk analyses. Their inclusion in quantitative rockfall risk assessment would, however, be necessary to express their efficiency in monetary terms and to allow comparison of forests with other protective measures, such as nets and dams. The goal of this study is to quantify the effect of forests on the occurrence frequency and intensity of rockfalls. We therefore defined an onset frequency of blocks based on a power-law magnitude-frequency distribution and determined their propagation probabilities on a virtual slope based on rockfall simulations. Simulations were run for different forest and non-forest scenarios under varying forest stand and terrain conditions. We analysed rockfall frequencies and intensities at five different distances from the release area. Based on two multivariate statistical prediction models, we investigated which of the terrain and forest characteristics predominantly drive the role of forest in reducing rockfall occurrence frequency and intensity and whether they are able to predict the effect of forest on rockfall risk. The rockfall occurrence frequency below forested slopes is reduced between approximately 10 and 90 % compared to non-forested slope conditions; whereas rockfall intensity is reduced by 10 to 70 %. This reduction increases with increasing slope length and decreases with decreasing tree density, tree diameter and increasing rock volume, as well as in cases of clustered or gappy forest structures. The statistical prediction models reveal that the cumulative basal area of trees, block volume and horizontal forest structure represent key variables for the prediction of the protective effect of forests. In order to validate these results, models have to be tested on real slopes with a wide variation of terrain and forest conditions.
Origin and fate of nanoparticles in marine water - Preliminary results.
Graca, Bożena; Zgrundo, Aleksandra; Zakrzewska, Danuta; Rzodkiewicz, Monika; Karczewski, Jakub
2018-05-05
The number, morphology and elemental composition of nanoparticles (<100 nm) in marine water was investigated using Variable Pressure Scanning Electron Microscopy (VP-SEM) and Energy-dispersive X-ray spectroscopy (EDS). Preliminary research conducted in the Baltic Sea showed that the number of nanoparticles in seawater varied from undetectable to 380 × 10² cm⁻³. Wind mixing and density barriers (thermocline) had a significant impact on the abundance and distribution of nanoparticles in water. Many more nanoparticles (mainly nanofibers) were detected in periods of intensive primary production and thermal stratification of water than at the end of the growing season and during periods of strong wind mixing. Temporal and spatial variability of nanoparticles as well as air mass trajectories indicated that the analysed nanofibers were both autochthonous and allochthonous (atmospheric), while the nanospheres were mainly autochthonous. The chemical composition of most of the analysed nanoparticles indicates their autochthonous, natural (biogenic/geogenic) origin. Silica nanofibers (probably the remains of flagellates), nanofibers composed of manganese and iron oxides (probably of microbial origin), and pyrite nanospheres (probably formed in anoxic sediments) were all identified in the samples. Only asbestos nanofibers, which were also detected, are probably allochthonous and anthropogenic. Copyright © 2018 Elsevier Ltd. All rights reserved.
England, Marion E; Phipps, Paul; Medlock, Jolyon M; Atkinson, Peter M; Atkinson, Barry; Hewson, Roger; Gale, Paul
2016-06-01
Crimean-Congo haemorrhagic fever virus (CCHFV) is a zoonotic virus transmitted by Hyalomma ticks, the immature stages of which may be carried by migratory birds. In this study, a total of 12 Hyalomma ticks were recovered from five of 228 migratory birds trapped in Spring, 2012 in southern Spain along the East Atlantic flyway. All collected ticks tested negative for CCHFV. While most birds had zero Hyalomma ticks, two individuals had four and five ticks each and the statistical distribution of Hyalomma tick counts per bird is over-dispersed compared to the Poisson distribution, demonstrating the need for intensive sampling studies to avoid underestimating the total number of ticks. Rates of tick exchange on migratory birds during their northwards migration will affect the probability that a Hyalomma tick entering Great Britain is positive for CCHFV. Drawing on published data, evidence is presented that the latitude of a European country affects the probability of entry of Hyalomma ticks on wild birds. Further data on Hyalomma infestation rates and tick exchange rates are required along the East Atlantic flyway to further our understanding of the origin of Hyalomma ticks (i.e., Africa or southern Europe) and hence the probability of entry of CCHFV into GB. © 2016 The Society for Vector Ecology.
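The over-dispersion statement can be checked directly from the counts implied by the abstract (223 birds with no ticks, three with one tick, one with four and one with five):

```python
# A minimal sketch: variance-to-mean ratio of the tick counts versus a Poisson model.
import numpy as np
from scipy import stats

counts = np.array([0] * 223 + [1, 1, 1, 4, 5])          # counts implied by the abstract
mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.4f}, variance = {var:.4f}, variance/mean = {var / mean:.1f}")

# Under a Poisson model with this mean, birds carrying 4 or more ticks would be
# essentially never observed, unlike in the data.
p_ge_4 = stats.poisson.sf(3, mu=mean)
print(f"Poisson-expected number of birds with >= 4 ticks: {len(counts) * p_ge_4:.2e}")
```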
Temporal complexity in emission from Anderson localized lasers
NASA Astrophysics Data System (ADS)
Kumar, Randhir; Balasubrahmaniyam, M.; Alee, K. Shadak; Mujumdar, Sushil
2017-12-01
Anderson localization lasers exploit resonant cavities formed due to structural disorder. The inherent randomness in the structure of these cavities realizes a probability distribution in all cavity parameters such as quality factors, mode volumes, mode structures, and so on, implying resultant statistical fluctuations in the temporal behavior. Here we provide direct experimental measurements of temporal width distributions of Anderson localization lasing pulses in intrinsically and extrinsically disordered coupled-microresonator arrays. We first illustrate signature exponential decays in the spatial intensity distributions of the lasing modes that quantify their localized character, and then measure the temporal width distributions of the pulsed emission over several configurations. We observe a dependence of temporal widths on the disorder strength, wherein the widths show a single-peaked, left-skewed distribution in extrinsic disorder and a dual-peaked distribution in intrinsic disorder. We propose a model based on coupled rate equations for an emitter and an Anderson cavity with a random mode structure, which gives excellent quantitative and qualitative agreement with the experimental observations. The experimental and theoretical analyses bring to the fore the temporal complexity in Anderson-localization-based lasing systems.
Spatial distribution on high-order-harmonic generation of an H2+ molecule in intense laser fields
NASA Astrophysics Data System (ADS)
Zhang, Jun; Ge, Xin-Lei; Wang, Tian; Xu, Tong-Tong; Guo, Jing; Liu, Xue-Shen
2015-07-01
High-order-harmonic generation (HHG) for the H2+ molecule in a 3-fs, 800-nm few-cycle Gaussian laser pulse combined with a static field is investigated by solving the one-dimensional electronic and one-dimensional nuclear time-dependent Schrödinger equation within the non-Born-Oppenheimer approximation. The spatial distribution of the HHG is demonstrated, and the results show the recombination of the electron with each of the two nuclei. The spatial distribution of the HHG spectra shows that there is little probability of recombination of the electron with the nuclei around the origin z = 0 a.u. and around the equilibrium internuclear positions z = ±1.3 a.u. This characteristic is independent of the laser parameters and is attributed solely to the molecular structure. Furthermore, we investigate the time-dependent electron-nuclear wave packet and the ionization probability to further explain the underlying physical mechanism.
Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon
2010-10-01
An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
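The straightforward intensity-vector DOA estimator analyzed here can be sketched for an ideal, noise- and reverberation-free plane wave (synthetic signals, unit characteristic impedance assumed); reverberation would bias exactly this estimate.

```python
# A minimal sketch: time-average the product of pressure and particle velocity and
# take the direction of the resulting intensity vector as the DOA estimate.
import numpy as np

rng = np.random.default_rng(0)
true_azimuth = np.deg2rad(40.0)

# For a plane wave, particle velocity is aligned with the propagation direction and
# proportional to the pressure (unit characteristic impedance assumed here).
pressure = rng.standard_normal(10_000)
vx = pressure * np.cos(true_azimuth)
vy = pressure * np.sin(true_azimuth)

intensity = np.array([np.mean(pressure * vx), np.mean(pressure * vy)])  # time-averaged intensity
estimated_azimuth = np.degrees(np.arctan2(intensity[1], intensity[0]))
print(f"estimated DOA: {estimated_azimuth:.1f} degrees")
```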
Quantum illumination for enhanced detection of Rayleigh-fading targets
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver, which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase, fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801], whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation, outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(M κ̄ N_S/N_B) when M κ̄ N_S/N_B ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, N_S ≪ 1 its brightness, κ̄ the target return's average intensity, and N_B the background light's brightness.
Yokoya, Masana; Shimizu, Hideyasu; Higuchi, Yukito
2012-01-01
The height of Japanese youth raised in the northern region tends to be greater than that of youth raised in the southern region; therefore, a geographical gradient in youth body height exists. Although this gradient has existed for about 100 years, the reasons for it remain unclear. Consideration of the nutritional improvement, economic growth, and intense migration that has occurred in this period indicates that it is probably the result of environmental rather than nutritional or genetic factors. To identify possible environmental factors, ecological analysis of prefecture-level data on the body size of 8- to 17-year-old youth averaged over a 13-year period (1996 to 2008) and Japanese mesh climatic data on the climatic variables of temperature, solar radiation, and effective day length (duration of photoperiod exceeding the threshold of light intensity) was performed. The geographical distribution of the standardized height of Japanese adolescents was found to be inversely correlated to a great extent with the distribution of effective day length at a light intensity greater than 4000 lx. The results of multiple regression analysis of effective day length, temperature, and weight (as an index of food intake) indicated that a combination of effective day length and weight was statistically significant as predictors of height in early adolescence; however, only effective day length was statistically significant as a predictor of height in late adolescence. Day length may affect height by affecting the secretion of melatonin, a hormone that inhibits sexual and skeletal maturation, which in turn induces increases in height. By affecting melatonin production, regional differences in the duration of the photoperiod may lead to regional differences in height. Exposure to light intensity greater than 4000 lx appears to be the threshold at which light intensity begins to affect the melatonin secretion of humans who spend much of their time indoors. PMID:23227226
Characterization and disaggregation of daily rainfall in the Upper Blue Nile Basin in Ethiopia
NASA Astrophysics Data System (ADS)
Engida, Agizew N.; Esteves, Michel
2011-03-01
In Ethiopia, available rainfall records are mainly limited to daily time steps. Though rainfall data at shorter time steps are important for various purposes like modeling of erosion processes and flood hydrographs, they are hardly available in Ethiopia. The objectives of this study were (i) to study the temporal characteristics of daily rains at two stations in the region of the Upper Blue Nile Basin (UBNB) and (ii) to calibrate and evaluate a daily rainfall disaggregation model. The analysis was based on rainfall data of Bahir Dar and Gonder Meteorological Stations. The disaggregation model used was the Modified Bartlett-Lewis Rectangular Pulse Model (MBLRPM). The mean daily rainfall intensity varied from about 4 mm in the dry season to 17 mm in the wet season with corresponding variation in raindays of 0.4-26 days. The observed maximum daily rainfall varied from 13 mm in the dry month to 200 mm in the wet month. The average wet/dry spell length varied from 1/21 days in the dry season to 6/1 days in the rainy season. Most of the rainfall occurs in the afternoon and evening periods of the day. Daily rainfall disaggregation using the MBLRPM alone resulted in poor match between the disaggregated and observed hourly rainfalls. Stochastic redistribution of the outputs of the model using Beta probability distribution function improved the agreement between observed and calculated hourly rain intensities. In areas where convective rainfall is dominant, the outputs of MBLRPM should be redistributed using relevant probability distributions to simulate the diurnal rainfall pattern.
Emission Patterns of Solar Type III Radio Bursts: Stereoscopic Observations
NASA Technical Reports Server (NTRS)
Thejappa, G.; MacDowall, R.; Bergamo, M.
2012-01-01
Simultaneous observations of solar type III radio bursts obtained by the STEREO A, B, and WIND spacecraft at low frequencies from different vantage points in the ecliptic plane are used to determine their directivity. The heliolongitudes of the sources of these bursts, estimated at different frequencies by assuming that they are located on the Parker spiral magnetic field lines emerging from the associated active regions into the spherically symmetric solar atmosphere, and the heliolongitudes of the spacecraft are used to estimate the viewing angle, which is the angle between the direction of the magnetic field at the source and the line connecting the source to the spacecraft. The normalized peak intensities at each spacecraft, R_j = I_j / Σ I_j (the subscript j corresponds to the spacecraft STEREO A, B, and WIND), which are defined as the directivity factors, are determined using the time profiles of the type III bursts. It is shown that the distribution of the viewing angles divides the type III bursts into: (1) bursts emitting into a very narrow cone centered around the tangent to the magnetic field with angular width of approximately 2 deg and (2) bursts emitting into a wider cone with angular width spanning from approximately -100 deg to approximately 100 deg. The plots of the directivity factors versus the viewing angles of the sources from all three spacecraft indicate that the type III emissions are very intense along the tangent to the spiral magnetic field lines at the source, and steadily fall as the viewing angles increase to higher values. The comparison of these emission patterns with the computed distributions of the ray trajectories indicates that the intense bursts visible in a narrow range of angles around the magnetic field directions probably are emitted in the fundamental mode, whereas the relatively weaker bursts visible to a wide range of angles are probably emitted in the harmonic mode.
Lee, T-F; Ting, H-M; Chao, P-J; Wang, H-Y; Shieh, C-S; Horng, M-F; Wu, J-M; Yeh, S-A; Cho, M-Y; Huang, E-Y; Huang, Y-J; Chen, H-C; Fang, F-M
2012-01-01
Objective We compared and evaluated the differences between two models for treating bilateral breast cancer (BBC): (i) dose–volume-based intensity-modulated radiation treatment (DV plan), and (ii) dose–volume-based intensity-modulated radiotherapy with generalised equivalent uniform dose-based optimisation (DV-gEUD plan). Methods The quality and performance of the DV plan and DV-gEUD plan using the Pinnacle3® system (Philips, Fitchburg, WI) were evaluated and compared in 10 patients with stage T2–T4 BBC. The plans were delivered on a Varian 21EX linear accelerator (Varian Medical Systems, Milpitas, CA) equipped with a Millennium 120 leaf multileaf collimator (Varian Medical Systems). The parameters analysed included the conformity index, homogeneity index, tumour control probability of the planning target volume (PTV), the volumes V20 Gy and V30 Gy of the organs at risk (OAR, including the heart and lungs), mean dose and the normal tissue complication probability. Results Both plans met the requirements for the coverage of PTV with similar conformity and homogeneity indices. However, the DV-gEUD plan had the advantage of dose sparing for OAR: the mean doses of the heart and lungs, lung V20 Gy, and heart V30 Gy in the DV-gEUD plan were lower than those in the DV plan (p<0.05). Conclusions A better result can be obtained by starting with a DV-generated plan and then improving it by adding gEUD-based improvements to reduce the number of iterations and to improve the optimum dose distribution. Advances to knowledge The DV-gEUD plan provided superior dosimetric results for treating BBC in terms of PTV coverage and OAR sparing than the DV plan, without sacrificing the homogeneity of dose distribution in the PTV. PMID:23091290
NASA Astrophysics Data System (ADS)
Shioiri, Tetsu; Asari, Naoki; Sato, Junichi; Sasage, Kosuke; Yokokura, Kunio; Homma, Mitsutaka; Suzuki, Katsumi
To investigate the reliability of vacuum insulation equipment, a study was carried out to clarify the breakdown probability distributions of a vacuum gap. Further, a double-break vacuum circuit breaker was investigated for its breakdown probability distribution. The test results show that the breakdown probability distribution of the vacuum gap can be represented by a Weibull distribution using a location parameter, which gives the voltage below which the breakdown probability is zero. The location parameter obtained from the Weibull plot depends on the electrode area. The shape parameter obtained from the Weibull plot of the vacuum gap was 10∼14 and is constant irrespective of the non-uniform field factor. The breakdown probability distribution after no-load switching can also be represented by a Weibull distribution using a location parameter. The shape parameter after no-load switching was 6∼8.5 and is constant irrespective of gap length. This indicates that the scatter of the breakdown voltage is increased by no-load switching. If the vacuum circuit breaker uses a double break, the breakdown probability at low voltage becomes lower than the single-break probability. Although the potential distribution is a concern in the double-break vacuum circuit breaker, its insulation reliability is better than that of the single-break vacuum interrupter even if the bias of the vacuum interrupters' voltage sharing is taken into account.
Cook, D A
2006-04-01
Models that estimate the probability of death of intensive care unit patients can be used to stratify patients according to the severity of their condition and to control for casemix and severity of illness. These models have been used for risk adjustment in quality monitoring, administration, management and research and as an aid to clinical decision making. Models such as the Mortality Prediction Model family, SAPS II, APACHE II, APACHE III and the organ system failure models provide estimates of the probability of in-hospital death of ICU patients. This review examines methods to assess the performance of these models. The key attributes of a model are discrimination (the accuracy of the ranking in order of probability of death) and calibration (the extent to which the model's prediction of probability of death reflects the true risk of death). These attributes should be assessed in existing models that predict the probability of patient mortality, and in any subsequent model that is developed for the purposes of estimating these probabilities. The literature contains a range of approaches for assessment which are reviewed and a survey of the methodologies used in studies of intensive care mortality models is presented. The systematic approach used by Standards for Reporting Diagnostic Accuracy provides a framework to incorporate these theoretical considerations of model assessment and recommendations are made for evaluation and presentation of the performance of models that estimate the probability of death of intensive care patients.
Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis
Heard, Nicholas A.; Turcotte, Melissa J. M.
2016-05-21
Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.
A Probability Model of Decompression Sickness at 4.3 Psia after Exercise Prebreathe
NASA Technical Reports Server (NTRS)
Conkin, Johnny; Gernhardt, Michael L.; Powell, Michael R.; Pollock, Neal
2004-01-01
Exercise PB can reduce the risk of decompression sickness on ascent to 4.3 psia when performed at the proper intensity and duration. Data are from seven tests. PB times ranged from 90 to 150 min. High intensity, short duration dual-cycle ergometry was done during the PB. This was done alone, or combined with intermittent low intensity exercise or periods of rest for the remaining PB. Nonambulating men and women performed light exercise from a semi-recumbent position at 4.3 psia for four hrs. The Research Model with age tested the probability that DCS increases with advancing age. The NASA Model with gender hypothesized that the probability of DCS increases if gender is female. Accounting for exercise and rest during PB with a variable half-time compartment for computed tissue N2 pressure advances our probability modeling of hypobaric DCS. Both models show that a small increase in exercise intensity during PB reduces the risk of DCS, and a larger increase in exercise intensity dramatically reduces risk. These models support the hypothesis that aerobic fitness is an important consideration for the risk of hypobaric DCS when exercise is performed during the PB.
Garriguet, Didier
2016-04-01
Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probabilities of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given the number of active days and of inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β) where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
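A minimal sketch of the contrast described above, assuming illustrative day counts and Beta(α, β) parameters rather than the survey's estimates: the Beta-binomial predictive probability of being active on 7 of 7 days versus one plausible reading of the earlier Beta-based calculation.

```python
from scipy import stats

active, inactive = 4, 3            # hypothetical active / inactive days observed
alpha, beta = 2.0, 2.0             # hypothetical Beta(alpha, beta) parameters for p

# One plausible reading of the earlier approach: p ~ Beta(1 + active, 1 + inactive),
# and adherence on 7 of 7 days is taken as E[p**7].
p_beta = stats.beta(1 + active, 1 + inactive).moment(7)

# Discrete approach from the article: the number of active days out of 7 follows a
# Beta-binomial(7, alpha + active, beta + inactive); adherence = P(7 of 7).
p_bb = stats.betabinom(7, alpha + active, beta + inactive).pmf(7)

print(f"Beta-based estimate of 7/7 adherence:          {p_beta:.3f}")
print(f"Beta-binomial-based estimate of 7/7 adherence: {p_bb:.3f}")
```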
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan
2005-01-01
Lightning strike location and peak current are monitored operationally in the Kennedy Space Center (KSC) Cape Canaveral Air Force Station (CCAFS) area by the Cloud to Ground Lightning Surveillance System (CGLSS). The present study compiles ten years' worth of CGLSS data into a database of near strikes. Using shuttle launch platform LP39A as a convenient central point, all strikes recorded within a 20-mile radius for the period of record (POR) from January 1, 1993 to December 31, 2002 were included in the subset database. Histograms and cumulative probability curves are produced for both strike intensity (peak current, in kA) and the corresponding magnetic inductance fields (in A/m). Results for the full POR have application to launch operations lightning monitoring and post-strike test procedures.
Martin, Julien; Runge, Michael C.; Flewelling, Leanne J.; Deutsch, Charles J.; Landsberg, Jan H.
2017-11-20
Red tides (blooms of the harmful alga Karenia brevis) are one of the major sources of mortality for the Florida manatee (Trichechus manatus latirostris), especially in southwest Florida. It has been hypothesized that the frequency and severity of red tides may increase in the future because of global climate change and other factors. To improve our ecological forecast for the effects of red tides on manatee population dynamics and long-term persistence, we conducted a formal expert judgment process to estimate probability distributions for the frequency and relative magnitude of red-tide-related manatee mortality (RTMM) events over a 100-year time horizon in three of the four regions recognized as manatee management units in Florida. This information was used to update a population viability analysis for the Florida manatee (the Core Biological Model). We convened a panel of 12 experts in manatee biology or red-tide ecology; the panel met to frame, conduct, and discuss the elicitation. Each expert provided a best estimate and plausible low and high values (bounding a confidence level of 80 percent) for each parameter in each of three regions (Northwest, Southwest, and Atlantic) of the subspecies’ range (excluding the Upper St. Johns River region) for two time periods (0−40 and 41−100 years from present). We fitted probability distributions for each parameter, time period, and expert by using these three elicited values. We aggregated the parameter estimates elicited from individual experts and fitted a parametric distribution to the aggregated results.Across regions, the experts expected the future frequency of RTMM events to be higher than historical levels, which is consistent with the hypothesis that global climate change (among other factors) may increase the frequency of red-tide blooms. The experts articulated considerable uncertainty, however, about the future frequency of RTMM events. The historical frequency of moderate and intense RTMM (combined) in the Southwest region was 0.35 (80-percent confidence interval [CI]: 0.21−0.52), whereas the forecast probability was 0.48 (80-percent CI: 0.30−0.64) over a 40-year projected time horizon. Moderate and intense RTMM events are expected to continue to be most frequent in the Southwest region, to increase in mean frequency in the Northwest region (historical frequency of moderate and intense RTMM events [combined] in the Northwest region was 0, whereas the forecast probability was 0.12 [80-percent CI: 0.02−0.39] over a 40-year projected time horizon) and in the Atlantic region (historical frequency of moderate and intense RTMM events [combined] in the Atlantic region was 0.05 [80-percent CI: 0.005–0.18], whereas the forecast probability was 0.11 [80-percent CI: 0.03−0.25] over a 40-year projected time horizon), and to remain absent from the Upper St. Johns River region. The impact of red-tide blooms on manatee mortality has been measured for the Southwest region but not for the Northwest and Atlantic regions, where such events have been rare. The expert panel predicted that the median magnitude of RTMM events in the Atlantic and Northwest regions will be much smaller than that in the Southwest; given the large uncertainties, however, they acknowledged the possibility that these events could be larger in their mortality impacts than in the Southwest region. By its nature, forecasting requires expert judgment because it is impossible to have empirical evidence about the future. 
The large uncertainties in parameter estimates over a 100-year timeframe are to be expected and may also indicate that the training provided to panelists successfully minimized one common pitfall of expert judgment, that of overconfidence. This study has provided useful and needed inputs to the Florida manatee population viability analysis associated with an important and recurrent source of mortality from harmful algal blooms.
An economic evaluation of home management of malaria in Uganda: an interactive Markov model.
Lubell, Yoel; Mills, Anne J; Whitty, Christopher J M; Staedke, Sarah G
2010-08-27
Home management of malaria (HMM), promoting presumptive treatment of febrile children in the community, is advocated to improve prompt appropriate treatment of malaria in Africa. The cost-effectiveness of HMM is likely to vary widely in different settings and with the antimalarial drugs used. However, no data on the cost-effectiveness of HMM programmes are available. A Markov model was constructed to estimate the cost-effectiveness of HMM as compared to conventional care for febrile illnesses in children without HMM. The model was populated with data from Uganda, but is designed to be interactive, allowing the user to adjust certain parameters, including the antimalarials distributed. The model calculates the cost per disability adjusted life year averted and presents the incremental cost-effectiveness ratio compared to a threshold value. Model output is stratified by level of malaria transmission and the probability that a child would receive appropriate care from a health facility, to indicate the circumstances in which HMM is likely to be cost-effective. The model output suggests that the cost-effectiveness of HMM varies with malaria transmission, the probability of appropriate care, and the drug distributed. Where transmission is high and the probability of appropriate care is limited, HMM is likely to be cost-effective from a provider perspective. Even with the most effective antimalarials, HMM remains an attractive intervention only in areas of high malaria transmission and in medium transmission areas with a lower probability of appropriate care. HMM is generally not cost-effective in low transmission areas, regardless of which antimalarial is distributed. Considering the analysis from the societal perspective decreases the attractiveness of HMM. Syndromic HMM for children with fever may be a useful strategy for higher transmission settings with limited health care and diagnosis, but is not appropriate for all settings. HMM may need to be tailored to specific settings, accounting for local malaria transmission intensity and availability of health services.
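This is not the Markov model itself, but a brief sketch of the cost-effectiveness arithmetic the abstract refers to (cost per DALY averted compared against a threshold); every per-episode number and the threshold below are invented placeholders.

```python
# Hypothetical two-arm comparison: expected cost and DALYs per febrile episode
# under HMM versus conventional care, and the resulting ICER.
cost_hmm, cost_conv = 1.20, 0.80        # US$ per episode (illustrative)
daly_hmm, daly_conv = 0.010, 0.016      # DALYs per episode (illustrative)

icer = (cost_hmm - cost_conv) / (daly_conv - daly_hmm)   # $ per DALY averted
threshold = 150.0                                         # illustrative willingness-to-pay
print(f"ICER = {icer:.0f} $/DALY averted; cost-effective: {icer < threshold}")
```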
Kornilov, Oleg; Toennies, J Peter
2008-05-21
Clusters consisting of normal H2 molecules, produced in a free jet expansion, are size selected by diffraction from a transmission nanograting prior to electron impact ionization. For each neutral cluster (H2)(N) (N=2-40), the relative intensities of the ion fragments Hn+ are measured with a mass spectrometer. H3+ is found to be the most abundant fragment up to N=17. With a further increase in N, the abundances of H3+, H5+, H7+, and H9+ first increase and, after passing through a maximum, approach each other. At N=40, they are about the same and more than a factor of 2 and 3 larger than for H11+ and H13+, respectively. For a given neutral cluster size, the intensities of the ion fragments follow a Poisson distribution. The fragmentation probabilities are used to determine the neutral cluster size distribution produced in the expansion at a source temperature of 30.1 K and a source pressure of 1.50 bar. The distribution shows no clear evidence of a magic number N=13 as predicted by theory and found in experiments with pure para-H2 clusters. The ion fragment distributions are also used to extract information on the internal energy distribution of the H3+ ions produced in the reaction H2+ + H2-->H3+ +H, which is initiated upon ionization of the cluster. The internal energy is assumed to be rapidly equilibrated and to determine the number of molecules subsequently evaporated. The internal energy distribution found in this way is in good agreement with data obtained in an earlier independent merged beam scattering experiment.
NASA Astrophysics Data System (ADS)
Anand, L. F. M.; Gudennavar, S. B.; Bubbly, S. G.; Kerur, B. R.
2015-12-01
The K to L shell total vacancy transfer probabilities of low Z elements Co, Ni, Cu, and Zn are estimated by measuring the Kβ to Kα intensity ratio adopting the 2π-geometry. The target elements were excited by 32.86 keV barium K-shell X-rays from a weak 137Cs γ-ray source. The emitted K-shell X-rays were detected using a low energy HPGe X-ray detector coupled to a 16 k MCA. The measured intensity ratios and the total vacancy transfer probabilities are compared with theoretical results and others' work, establishing a good agreement.
A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events
NASA Astrophysics Data System (ADS)
Zorzetto, E.; Marani, M.
2017-12-01
The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from samples of annual maxima (AM) or with a peaks over threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. Here we use, and tailor to remotely sensed rainfall estimates, an alternative approach based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured `ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall, while taking full advantage of its high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are here investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
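The metastatistical idea can be sketched as follows, under the simplifying assumptions that the ordinary daily events of each year follow a Weibull distribution and that the record is synthetic; none of the numbers correspond to the TRMM case study.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# Hypothetical record: daily 'ordinary' rainfall events (mm) for several years.
years = [rng.weibull(0.8, size=rng.integers(60, 110)) * 12.0 for _ in range(8)]

def mev_cdf(x, yearly_events):
    """MEV-style CDF of the annual maximum at x: average over years of
    F_j(x)**n_j, where F_j is a Weibull fitted to year j's ordinary events."""
    terms = []
    for ev in yearly_events:
        c, _, scale = stats.weibull_min.fit(ev, floc=0.0)
        terms.append(stats.weibull_min.cdf(x, c, loc=0.0, scale=scale) ** len(ev))
    return np.mean(terms)

# Daily rainfall with an (illustrative) 20-year return period: solve CDF = 1 - 1/20.
x20 = brentq(lambda x: mev_cdf(x, years) - (1 - 1 / 20), 1.0, 500.0)
print(f"Estimated 20-year daily rainfall: {x20:.1f} mm")
```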
Identifying Changes in the Probability of High Temperature, High Humidity Heat Wave Events
NASA Astrophysics Data System (ADS)
Ballard, T.; Diffenbaugh, N. S.
2016-12-01
Understanding how heat waves will respond to climate change is critical for adequate planning and adaptation. While temperature is the primary determinant of heat wave severity, humidity has been shown to play a key role in heat wave intensity with direct links to human health and safety. Here we investigate the individual contributions of temperature and specific humidity to extreme heat wave conditions in recent decades. Using global NCEP-DOE Reanalysis II daily data, we identify regional variability in the joint probability distribution of humidity and temperature. We also identify a statistically significant positive trend in humidity over the eastern U.S. during heat wave events, leading to an increased probability of high humidity, high temperature events. The extent to which we can expect this trend to continue under climate change is complicated due to variability between CMIP5 models, in particular among projections of humidity. However, our results support the notion that heat wave dynamics are characterized by more than high temperatures alone, and understanding and quantifying the various components of the heat wave system is crucial for forecasting future impacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng Chuan; Giantsoudi, Drosoula; Grassberger, Clemens
2013-05-15
Purpose: Biological effect of radiation can be enhanced with hypofractionation, localized dose escalation, and, in particle therapy, with optimized distribution of linear energy transfer (LET). The authors describe a method to construct inhomogeneous fractional dose (IFD) distributions, and evaluate the potential gain in the therapeutic effect from their delivery in proton therapy delivered by pencil beam scanning. Methods: For 13 cases of prostate cancer, the authors considered hypofractionated courses of 60 Gy delivered in 20 fractions. (All doses denoted in Gy include the proton's mean relative biological effectiveness (RBE) of 1.1.) Two types of plans were optimized using two opposed lateral beams to deliver a uniform dose of 3 Gy per fraction to the target by scanning: (1) in conventional full-target plans (FTP), each beam irradiated the entire gland, (2) in split-target plans (STP), beams irradiated only the respective proximal hemispheres (prostate split sagittally). Inverse planning yielded intensity maps, in which discrete position control points of the scanned beam (spots) were assigned optimized intensity values. FTP plans preferentially required a higher intensity of spots in the distal part of the target, while STP, by design, employed proximal spots. To evaluate the utility of IFD delivery, IFD plans were generated by rearranging the spot intensities from FTP or STP intensity maps, separately as well as combined using a variety of mixing weights. IFD courses were designed so that, in alternating fractions, one of the hemispheres of the prostate would receive a dose boost and the other receive a lower dose, while the total physical dose from the IFD course was roughly uniform across the prostate. IFD plans were normalized so that the equivalent uniform dose (EUD) of rectum and bladder did not increase, compared to the baseline FTP plan, which irradiated the prostate uniformly in every fraction. An EUD-based model was then applied to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). To assess potential local RBE variations, LET distributions were calculated with Monte Carlo, and compared for different plans. The results were assessed in terms of their sensitivity to uncertainties in model parameters and delivery. Results: IFD courses included equal number of fractions boosting either hemisphere, thus, the combined physical dose was close to uniform throughout the prostate. However, for the entire course, the prostate EUD in IFD was higher than in conventional FTP by up to 14%, corresponding to the estimated increase in TCP to 96% from 88%. The extent of gain depended on the mixing factor, i.e., relative weights used to combine FTP and STP spot weights. Increased weighting of STP typically yielded a higher target EUD, but also led to increased sensitivity of dose to variations in the proton's range. Rectal and bladder EUD were same or lower (per normalization), and the NTCP for both remained below 1%. The LET distributions in IFD also depended strongly on the mixing weights: plans using higher weight of STP spots yielded higher LET, indicating a potentially higher local RBE. Conclusions: In proton therapy delivered by pencil beam scanning, improved therapeutic outcome can potentially be expected with delivery of IFD distributions, while administering the prescribed quasi-uniform dose to the target over the entire course. The biological effectiveness of IFD may be further enhanced by optimizing the LET distributions.
IFD distributions are characterized by a dose gradient located in proximity of the prostate's midplane, thus, the fidelity of delivery would depend crucially on the precision with which the proton range could be controlled.
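The EUD-based evaluation mentioned above can be sketched with a generic generalized-EUD and logistic TCP form; the DVH bins, the exponent a, and the TCD50 and gamma50 values below are placeholders, not the parameters used by the authors.

```python
import numpy as np

def eud(dose_bins, volume_fractions, a):
    """Generalized equivalent uniform dose for a differential DVH:
    EUD = (sum_i v_i * D_i**a) ** (1/a)."""
    v = np.asarray(volume_fractions) / np.sum(volume_fractions)
    return np.sum(v * np.asarray(dose_bins) ** a) ** (1.0 / a)

def tcp(eud_gy, tcd50=62.0, gamma50=2.0):
    """A common phenomenological TCP form: 1 / (1 + (TCD50/EUD)**(4*gamma50)).
    TCD50 and gamma50 here are placeholder values, not the study's parameters."""
    return 1.0 / (1.0 + (tcd50 / eud_gy) ** (4.0 * gamma50))

# Hypothetical prostate DVH: dose bins (Gy) and the fraction of volume in each.
doses = np.array([56.0, 58.0, 60.0, 62.0, 64.0])
vols = np.array([0.05, 0.15, 0.40, 0.30, 0.10])
e = eud(doses, vols, a=-10.0)   # a negative exponent emphasizes cold spots (tumor-like)
print(f"EUD = {e:.1f} Gy, TCP = {tcp(e):.2f}")
```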
NASA Astrophysics Data System (ADS)
Podladchikova, O.; Lefebvre, B.; Krasnoselskikh, V.; Podladchikov, V.
An important task for the problem of coronal heating is to produce reliable evaluations of the statistical properties of energy release and eruptive events such as micro- and nanoflares in the solar corona. Different types of distributions for the peak flux, peak count rate measurements, pixel intensities, total energy flux or emission measure increases, or waiting times have appeared in the literature. This raises the question of a precise evaluation and classification of such distributions. For this purpose, we use the method proposed by K. Pearson at the beginning of the last century, based on the relationship between the first four moments of the distribution. Pearson's technique encompasses and classifies a broad range of distributions, including some of those which have appeared in the literature about coronal heating. This technique is successfully applied to simulated data from the model of Krasnoselskikh et al. (2002). It provides successful fits to the empirical distributions of the dissipated energy, and classifies them as a function of model parameters such as the dissipation mechanism and threshold.
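Pearson's criterion, built from the first four sample moments, can be computed directly as sketched below; the two samples are generic stand-ins, not output of the Krasnoselskikh et al. model.

```python
import numpy as np
from scipy import stats

def pearson_criterion(sample):
    """Moment summary used in Pearson's classification: beta1 = squared skewness,
    beta2 = (non-excess) kurtosis, and the criterion kappa whose sign and
    magnitude select the Pearson type."""
    b1 = stats.skew(sample) ** 2
    b2 = stats.kurtosis(sample, fisher=False)
    kappa = b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))
    return b1, b2, kappa

rng = np.random.default_rng(2)
for name, sample in [("lognormal", rng.lognormal(0, 1, 5000)),
                     ("beta(2,5)", rng.beta(2, 5, 5000))]:
    b1, b2, kappa = pearson_criterion(sample)
    print(f"{name:10s}  beta1={b1:.2f}  beta2={b2:.2f}  kappa={kappa:.2f}")
# Roughly: kappa < 0 suggests Type I (beta family), 0 < kappa < 1 Type IV,
# kappa > 1 Type VI; boundary values pick out the normal, Type III, etc.
```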
Chaotic oscillations and noise transformations in a simple dissipative system with delayed feedback
NASA Astrophysics Data System (ADS)
Zverev, V. V.; Rubinstein, B. Ya.
1991-04-01
We analyze the statistical behavior of signals in nonlinear circuits with delayed feedback in the presence of external Markovian noise. For the special class of circuits with intense phase mixing we develop an approach for the computation of the probability distributions and multitime correlation functions based on the random phase approximation. Both Gaussian and Kubo-Andersen models of external noise statistics are analyzed and the existence of the stationary (asymptotic) random process in the long-time limit is shown. We demonstrate that a nonlinear system with chaotic behavior becomes a noise amplifier with specific statistical transformation properties.
Probability hazard map for future vent opening at Etna volcano (Sicily, Italy).
NASA Astrophysics Data System (ADS)
Brancato, Alfonso; Tusa, Giuseppina; Coltelli, Mauro; Proietti, Cristina
2014-05-01
Mount Etna is a composite stratovolcano located along the Ionian coast of eastern Sicily. The frequent occurrence of flank eruptions (at intervals of years, mostly concentrated along the NE, S and W rift zones) leads to a high volcanic hazard that, linked with intense urbanization, poses a high volcanic risk. A long-term volcanic hazard assessment, mainly based on the past behaviour of the Etna volcano, is the basic tool for the evaluation of this risk. A reliable forecast of where the next eruption will occur is therefore needed. A computer-assisted analysis and probabilistic evaluations will provide the relative map, thus allowing identification of the areas prone to the highest hazard. On these grounds, the use of a code such as BET_EF (Bayesian Event Tree_Eruption Forecasting) showed that a suitable analysis can be performed (Selva et al., 2012). Following an analysis we are performing, a total of 6886 point-vents referring to the last 4.0 ka of Etna flank activity, spread over an area of 744 km2 (divided into N=2976 square cells with a side of 500 m), allowed us to estimate a pdf by applying a Gaussian kernel. The probability values represent a complete set of mutually exclusive outcomes and their sum is normalized to one over the investigated area; then, the basic assumptions of a Dirichlet distribution (the prior distribution set in the BET_EF code (Marzocchi et al., 2004, 2008)) still hold. One fundamental parameter is the equivalent number of data, which depicts our confidence in the best-guess probability. The BET_EF code also works with a likelihood function. This is modelled by a Multinomial distribution, with parameters representing the number of vents in each cell and the total number of past data (i.e. the 6886 point-vents). Given the grid of N cells, the final posterior distribution is evaluated by multiplying the a priori Dirichlet probability distribution with the past data in each cell through the likelihood. The probability hazard map shows a tendency to concentrate along the NE and S rifts, as well as the Valle del Bove, increasing the difference in probability between these areas and the rest of the volcanic edifice. It is worth noting that a higher significance is still evident along the W rift, even if not comparable with that of the above-mentioned areas. References Marzocchi W., Sandri L., Gasparini P., Newhall C. and Boschi E.; 2004: Quantifying probabilities of volcanic events: The example of volcanic hazard at Mount Vesuvius, J. Geophys. Res., 109, B11201, doi:10.1029/2004JB00315U. Marzocchi W., Sandri L. and Selva J.; 2008: BET_EF: a probabilistic tool for long- and short-term eruption forecasting, Bull. Volcanol., 70, 623 - 632, doi: 10.1007/s00445-007-0157-y. Selva J., Orsi G., Di Vito M.A., Marzocchi W. and Sandri L.; 2012: Probability hazard map for future vent opening at the Campi Flegrei caldera, Italy, Bull. Volcanol., 74, 497 - 510, doi: 10.1007/s00445-011-0528-2.
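The kernel-density step described above can be sketched as follows; the vent coordinates, grid extent, and cell size are invented stand-ins for the 6886 vents and 500 m cells of the study, and the result is only the smoothed prior probability per cell, not the full BET_EF posterior.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical past vent coordinates (km, local east/north frame), shape (2, N).
vents = np.vstack([rng.normal(0, 3, 400), rng.normal(0, 5, 400)])

kde = gaussian_kde(vents)           # Gaussian kernel density over vent locations

# Evaluate the density on a regular grid of cells and normalize so that the
# per-cell probabilities sum to one over the studied area.
x = np.arange(-10, 10, 0.5)         # 0.5 km cells
y = np.arange(-15, 15, 0.5)
X, Y = np.meshgrid(x, y)
density = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
prob = density / density.sum()
print("highest-probability cell:", np.unravel_index(prob.argmax(), prob.shape),
      "with probability", prob.max())
```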
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas
2015-01-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191
Tellez, Jason A; Schmidt, Jason D
2011-08-20
The propagation of a free-space optical communications signal through atmospheric turbulence experiences random fluctuations in intensity, including signal fades, which negatively impact the performance of the communications link. The gamma-gamma probability density function is commonly used to model the scintillation of a single beam. One proposed method to reduce the occurrence of scintillation-induced fades at the receiver plane involves the use of multiple beams propagating through independent paths, resulting in a sum of independent gamma-gamma random variables. Recently an analytical model for the probability distribution of irradiance from the sum of multiple independent beams was developed. Because truly independent beams are practically impossible to create, we present here a more general but approximate model for the distribution of beams traveling through partially correlated paths. This model compares favorably with wave-optics simulations and highlights the reduced scintillation as the number of transmitted beams is increased. Additionally, a pulse-position modulation scheme is used to reduce the impact of signal fades when they occur. Analytical and simulated results showed significantly improved performance when compared to fixed threshold on/off keying. © 2011 Optical Society of America
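The single-beam gamma-gamma model referred to above has a closed-form pdf that is straightforward to evaluate; the alpha and beta values below are arbitrary illustrative choices, not the paper's fitted parameters, and the fade threshold is likewise invented.

```python
import numpy as np
from scipy.special import kv, gammaln
from scipy.integrate import quad

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance pdf (unit mean irradiance):
    p(I) = 2 (ab)^((a+b)/2) / (Gamma(a) Gamma(b)) * I^((a+b)/2 - 1) * K_{a-b}(2 sqrt(ab I))."""
    I = np.asarray(I, dtype=float)
    log_coef = (np.log(2.0) + 0.5 * (alpha + beta) * np.log(alpha * beta)
                - gammaln(alpha) - gammaln(beta))
    return (np.exp(log_coef + (0.5 * (alpha + beta) - 1.0) * np.log(I))
            * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I)))

# Probability of a fade below 0.1 of the mean irradiance for illustrative
# (moderate-turbulence) parameters, by numerical integration of the pdf.
alpha, beta = 4.0, 2.0
p_fade, _ = quad(lambda x: gamma_gamma_pdf(x, alpha, beta), 0.0, 0.1)
print("P(I < 0.1) =", p_fade)
```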
Comparison of treatment plans: a retrospective study by the method of radiobiological evaluation
NASA Astrophysics Data System (ADS)
Puzhakkal, Niyas; Kallikuzhiyil Kochunny, Abdullah; Manthala Padannayil, Noufal; Singh, Navin; Elavan Chalil, Jumanath; Kulangarakath Umer, Jamshad
2016-09-01
There are many situations in radiotherapy where multiple treatment plans need to be compared for selection of an optimal plan. In this study we performed the radiobiological method of plan evaluation to verify the treatment plan comparison procedure of our clinical practice. We estimated and correlated various radiobiological dose indices with physical dose metrics for a total of 30 patients representing typical cases of head and neck, prostate and brain tumors. Three sets of plans along with a clinically approved plan (final plan), treated by either Intensity Modulated Radiation Therapy (IMRT) or Rapid Arc (RA) techniques, were considered. The study yielded improved target coverage for final plans; however, no appreciable differences in the doses and complication probabilities of organs at risk were noticed. Even though all four plans showed adequate dose distributions, from a dosimetric point of view the final plan had the most acceptable dose distribution. The estimated biological outcome and dose-volume histogram data showed the least differences between plans for IMRT when compared to RA. Our retrospective study, based on 120 plans, validated the radiobiological method of plan evaluation. The tumor cure and normal tissue complication probabilities were found to be correlated with the corresponding physical dose indices.
Bayesian assessment of moving group membership: importance of models and prior knowledge
NASA Astrophysics Data System (ADS)
Lee, Jinhee; Song, Inseok
2018-04-01
Young nearby moving groups are important and useful in many fields of astronomy such as studying exoplanets, low-mass stars, and the stellar evolution of early planetary systems over tens of millions of years, which has led to intensive searches for their members. Identification of members depends sensitively on the models used; therefore, careful examination of the models is required. In this study, we investigate the effects of the models used in moving group membership calculations based on a Bayesian framework (e.g. BANYAN II), focusing on the beta-Pictoris moving group (BPMG). Three improvements for building models are suggested: (1) updating a list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZ and UVW. The effect of each change is investigated, and we suggest using all of these improvements simultaneously in future membership probability calculations. Using this improved moving group membership calculation and a careful examination of ages, 57 bona fide members of BPMG are confirmed, including 12 new members. We additionally suggest 17 highly probable members.
Random Partition Distribution Indexed by Pairwise Information
Dahl, David B.; Day, Ryan; Tsai, Jerry W.
2017-01-01
We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318
Expert Elicitations of 2100 Emission of CO2
NASA Astrophysics Data System (ADS)
Ho, Emily; Bosetti, Valentina; Budescu, David; Keller, Klaus; van Vuuren, Detlef
2017-04-01
Emission scenarios such as Shared Socioeconomic Pathways (SSPs) and Representative Concentration Pathways (RCPs) are used intensively for climate research (e.g. climate change projections) and policy analysis. While the range of these scenarios provides an indication of uncertainty, these scenarios are typically not associated with probability values. Some studies (e.g. Vuuren et al, 2007; Gillingham et al., 2015) took a different approach associating baseline emission pathways (conditionally) with probability distributions. This paper summarizes three studies where climate change experts were asked to conduct pair-wise comparisons of possible ranges of 2100 greenhouse gas emissions and rate the relative likelihood of the ranges. The elicitation was performed under two sets of assumptions: 1) a situation where no climate policies are introduced beyond the ones already in place (baseline scenario), and 2) a situation in which countries have ratified the voluntary policies in line with the long term target embedded in the 2015 Paris Agreement. These indirect relative judgments were used to construct subjective cumulative distribution functions. We show that by using a ratio scaling method that invokes relative likelihoods of scenarios, a subjective probability distribution can be derived for each expert that expresses their beliefs in the projected greenhouse gas emissions range in 2100. This method is shown to elicit stable estimates that require minimal adjustment and is relatively invariant to the partition of the domain of interest. Experts also rated the method as being easy and intuitive to use. We also report results of a study that allowed participants to choose their own ranges of greenhouse gas emissions to remove potential anchoring bias. We discuss the implications of the use of this method for facilitating comparison and communication of beliefs among diverse users of climate science research.
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.
NASA Astrophysics Data System (ADS)
Melott, Adrian L.; Thomas, Brian C.
2011-05-01
Cosmic radiation backgrounds are a constraint on life, and their distribution will affect the Galactic Habitable Zone. Life on Earth has developed in the context of these backgrounds, and characterizing event rates will elaborate the important influences. This in turn can be a base for comparison with other potential life-bearing planets. In this review, we estimate the intensities and rates of occurrence of many kinds of strong radiation bursts by astrophysical entities, ranging from gamma-ray bursts at cosmological distances to the Sun itself. Many of these present potential hazards to the biosphere; on timescales long compared with human history, the probability of an event intense enough to disrupt life on the land surface or in the oceans becomes large. Both photons (e.g., X-rays) and high-energy protons and other nuclei (often called "cosmic rays") constitute hazards. For either species, one of the mechanisms that comes into play even at moderate intensities is the ionization of Earth's atmosphere, which leads through chemical changes (specifically, depletion of stratospheric ozone) to increased ultraviolet B flux from the Sun reaching the surface. UVB is extremely hazardous to most life due to its strong absorption by the genetic material DNA and subsequent breaking of chemical bonds. This often leads to mutation or cell death. It is easily lethal to the microorganisms that lie at the base of the food chain in the ocean. We enumerate the known sources of radiation and characterize their intensities at Earth and rates or upper limits on these quantities. When possible, we estimate a "lethal interval," our best estimate of how often a major extinction-level event is probable given the current state of knowledge; we base these estimates on computed or expected depletion of stratospheric ozone. In general, moderate-level events are dominated by the Sun, but the far more severe infrequent events are probably dominated by gamma-ray bursts and supernovae. We note for the first time that so-called "short-hard" gamma-ray bursts are a substantial threat, comparable in magnitude to supernovae and greater than that of the higher-luminosity long bursts considered in most past work. Given their precursors, short bursts may come with little or no warning.
Melott, Adrian L; Thomas, Brian C
2011-05-01
Cosmic radiation backgrounds are a constraint on life, and their distribution will affect the Galactic Habitable Zone. Life on Earth has developed in the context of these backgrounds, and characterizing event rates will elaborate the important influences. This in turn can be a base for comparison with other potential life-bearing planets. In this review, we estimate the intensities and rates of occurrence of many kinds of strong radiation bursts by astrophysical entities, ranging from gamma-ray bursts at cosmological distances to the Sun itself. Many of these present potential hazards to the biosphere; on timescales long compared with human history, the probability of an event intense enough to disrupt life on the land surface or in the oceans becomes large. Both photons (e.g., X-rays) and high-energy protons and other nuclei (often called "cosmic rays") constitute hazards. For either species, one of the mechanisms that comes into play even at moderate intensities is the ionization of Earth's atmosphere, which leads through chemical changes (specifically, depletion of stratospheric ozone) to increased ultraviolet B flux from the Sun reaching the surface. UVB is extremely hazardous to most life due to its strong absorption by the genetic material DNA and subsequent breaking of chemical bonds. This often leads to mutation or cell death. It is easily lethal to the microorganisms that lie at the base of the food chain in the ocean. We enumerate the known sources of radiation and characterize their intensities at Earth and rates or upper limits on these quantities. When possible, we estimate a "lethal interval," our best estimate of how often a major extinction-level event is probable given the current state of knowledge; we base these estimates on computed or expected depletion of stratospheric ozone. In general, moderate-level events are dominated by the Sun, but the far more severe infrequent events are probably dominated by gamma-ray bursts and supernovae. We note for the first time that so-called "short-hard" gamma-ray bursts are a substantial threat, comparable in magnitude to supernovae and greater than that of the higher-luminosity long bursts considered in most past work. Given their precursors, short bursts may come with little or no warning.
Hanly, Patrick J; Mittelbach, Gary G; Schemske, Douglas W
2017-06-01
The nearly universal pattern that species richness increases from the poles to the equator (the latitudinal diversity gradient [LDG]) has been of intense interest since its discovery by early natural-history explorers. Among the many hypotheses proposed to explain the LDG, latitudinal variation in (1) productivity, (2) time and area available for diversification, and (3) speciation and/or extinction rates have recently received the most attention. Because tropical regions are older and were formerly more widespread, these factors are often intertwined, hampering efforts to distinguish their relative contributions to the LDG. Here we examine the global distribution of endemic lake fishes to determine how lake age, area, and latitude each affect the probability of speciation and the extent of diversification occurring within a lake. We analyzed the distribution of endemic fishes worldwide (1,933 species and subspecies from 47 families in 2,746 lakes) and find that the probability of a lake containing an endemic species and the total number of endemics per lake increase with lake age and area and decrease with latitude. Moreover, the geographic locations of endemics in 34 of 41 families are found at lower latitudes than those of nonendemics. We propose that the greater diversification of fish at low latitudes may be driven in part by ecological opportunities promoted by tropical climates and by the coevolution of species interactions.
Choi, Mi-Jung; Shin, Kwang-Soon
2014-01-01
The object of this study was to investigate the differences in the physical and sensory properties of various premium ice creams. The physical properties of the various ice creams were compared by manufacturing brand. The water contents of the samples differed, with BR having the highest value at 60.5%, followed by NT and CS at 57.8% and 56.9%, respectively. The higher the water content, the lower the Brix and milk fat contents in all samples. The density showed almost similar values in all samples (p>0.05). The viscosity of each ice cream had no effect on the water content in any of the brands. Before melting of the ice cream, the total color difference was dependent on the lightness, especially in the vanilla ice cream, owing to the reflection of light on the surface of the ice crystals. The CS product melted the fastest. In the sensory test, CS obtained a significantly higher sweetness intensity score but a lower score for color intensity, probably due to the smaller difference in total color, because of which consumers might perceive the color of CS as less intense. From this study, the cold chain system for ice cream distribution might be important in determining the physical properties, although the concentration of milk fat is a key factor in premium ice cream.
Studies on Physical and Sensory Properties of Premium Vanilla Ice Cream Distributed in Korean Market
Choi, Mi-Jung
2014-01-01
The object of this study was to investigate the differences in the physical and sensory properties of various premium ice creams. The physical properties of the various ice creams were compared by manufacturing brand. The water contents of the samples differed, with BR having the highest value at 60.5%, followed by NT and CS at 57.8% and 56.9%, respectively. The higher the water content, the lower the Brix and milk fat contents in all samples. The density showed almost similar values in all samples (p>0.05). The viscosity of each ice cream had no effect on the water content in any of the brands. Before melting of the ice cream, the total color difference was dependent on the lightness, especially in the vanilla ice cream, owing to the reflection of light on the surface of the ice crystals. The CS product melted the fastest. In the sensory test, CS obtained a significantly higher sweetness intensity score but a lower score for color intensity, probably due to the smaller difference in total color, because of which consumers might perceive the color of CS as less intense. From this study, the cold chain system for ice cream distribution might be important in determining the physical properties, although the concentration of milk fat is a key factor in premium ice cream. PMID:26761671
NASA Astrophysics Data System (ADS)
Wang, Can-Jun; Wei, Qun; Mei, Dong-Cheng
2008-03-01
The associated relaxation time T and the normalized correlation function C(s) for a tumor cell growth system subjected to colored noises are investigated. Using the Novikov theorem and the Fox approach, the steady-state probability distribution is obtained. Based on it, the expressions of T and C(s) are derived by means of the projection operator method, in which the effects of the memory kernels of the correlation function are taken into account. Performing the numerical computations, it is found that: (1) as the cross-correlation intensity |λ|, the additive noise intensity α and the multiplicative noise self-correlation time τ increase, the tumor cell numbers can be restrained; the cross-correlation time τ and the multiplicative noise intensity D can induce an increase in the tumor cell numbers; however, the additive noise self-correlation time τ does not affect the tumor cell numbers; the relaxation time T exhibits a stochastic resonance-like phenomenon, and the distribution curves exhibit a single-maximum structure as D increases. (2) The cross-correlation strength λ weakens the related activity between two states of the tumor cell numbers at different times, and enhances the stability of the tumor cell growth system in the steady state; in contrast, the noise correlation times τ enhance the related activity between two states at different times; however, τ has no effect on the related activity between two states at different times.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
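The two-step recipe summarized in this abstract, spectral coloring of white Gaussian noise followed by a memoryless transform to the target amplitude distribution, can be sketched as follows; the power-law spectrum and the gamma target are arbitrary choices, not the paper's examples, and the final transform only approximately preserves the imposed spectrum.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 256

# Step 1: white Gaussian noise, colored in the Fourier domain to impose an
# isotropic power-law power spectral density (illustrative choice).
white = rng.standard_normal((n, n))
fx = np.fft.fftfreq(n)
f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
f[0, 0] = f[0, 1]                               # avoid division by zero at DC
filt = f ** (-1.5)                              # amplitude filter ~ sqrt(PSD)
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
colored = (colored - colored.mean()) / colored.std()

# Step 2: memoryless transform of the colored Gaussian field to the desired
# amplitude distribution via its inverse CDF (here a gamma distribution).
u = stats.norm.cdf(colored)                     # uniform marginals
field = stats.gamma.ppf(u, a=2.0, scale=1.0)    # target pdf: gamma(2, 1)

print("sample mean / variance of transformed field:", field.mean(), field.var())
```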
Conroy, M.J.; Runge, J.P.; Barker, R.J.; Schofield, M.R.; Fonnesbeck, C.J.
2008-01-01
Many organisms are patchily distributed, with some patches occupied at high density, others at lower densities, and others not occupied. Estimation of overall abundance can be difficult and is inefficient via intensive approaches such as capture-mark-recapture (CMR) or distance sampling. We propose a two-phase sampling scheme and model in a Bayesian framework to estimate abundance for patchily distributed populations. In the first phase, occupancy is estimated by binomial detection samples taken on all selected sites, where selection may be of all sites available, or a random sample of sites. Detection can be by visual surveys, detection of sign, physical captures, or other approach. At the second phase, if a detection threshold is achieved, CMR or other intensive sampling is conducted via standard procedures (grids or webs) to estimate abundance. Detection and CMR data are then used in a joint likelihood to model probability of detection in the occupancy sample via an abundance-detection model. CMR modeling is used to estimate abundance for the abundance-detection relationship, which in turn is used to predict abundance at the remaining sites, where only detection data are collected. We present a full Bayesian modeling treatment of this problem, in which posterior inference on abundance and other parameters (detection, capture probability) is obtained under a variety of assumptions about spatial and individual sources of heterogeneity. We apply the approach to abundance estimation for two species of voles (Microtus spp.) in Montana, USA. We also use a simulation study to evaluate the frequentist properties of our procedure given known patterns in abundance and detection among sites as well as design criteria. For most population characteristics and designs considered, bias and mean-square error (MSE) were low, and coverage of true parameter values by Bayesian credibility intervals was near nominal. Our two-phase, adaptive approach allows efficient estimation of abundance of rare and patchily distributed species and is particularly appropriate when sampling in all patches is impossible, but a global estimate of abundance is required.
The global impact distribution of Near-Earth objects
NASA Astrophysics Data System (ADS)
Rumpf, Clemens; Lewis, Hugh G.; Atkinson, Peter M.
2016-02-01
Asteroids that could collide with the Earth are listed on the publicly available Near-Earth object (NEO) hazard web sites maintained by the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The impact probability distribution of 69 potentially threatening NEOs from these lists that produce 261 dynamically distinct impact instances, or Virtual Impactors (VIs), were calculated using the Asteroid Risk Mitigation and Optimization Research (ARMOR) tool in conjunction with OrbFit. ARMOR projected the impact probability of each VI onto the surface of the Earth as a spatial probability distribution. The projection considers orbit solution accuracy and the global impact probability. The method of ARMOR is introduced and the tool is validated against two asteroid-Earth collision cases with objects 2008 TC3 and 2014 AA. In the analysis, the natural distribution of impact corridors is contrasted against the impact probability distribution to evaluate the distributions' conformity with the uniform impact distribution assumption. The distribution of impact corridors is based on the NEO population and orbital mechanics. The analysis shows that the distribution of impact corridors matches the common assumption of uniform impact distribution and the result extends the evidence base for the uniform assumption from qualitative analysis of historic impact events into the future in a quantitative way. This finding is confirmed in a parallel analysis of impact points belonging to a synthetic population of 10,006 VIs. Taking into account the impact probabilities introduced significant variation into the results and the impact probability distribution, consequently, deviates markedly from uniformity. The concept of impact probabilities is a product of the asteroid observation and orbit determination technique and, thus, represents a man-made component that is largely disconnected from natural processes. It is important to consider impact probabilities because such information represents the best estimate of where an impact might occur.
On the derivation of the areal reduction factor of storms
NASA Astrophysics Data System (ADS)
Bacchi, Baldassare; Ranzi, Roberto
A stochastic derivation of the areal reduction factor (ARF) of the storm intensity is presented: it is based on the analysis of the crossing properties of the rainfall process aggregated in space and time. As a working hypothesis, the number of crossings of high rainfall intensity levels is assumed to be Poisson-distributed, and a hyperbolic tail of the probability of exceedance of rainfall intensity is adopted. These hypotheses are supported by the analysis of radar maps during an intense storm event which occurred in Northern Italy. The reduction factor derived from this analysis shows a power-law decay with respect to the area of integration and the duration of the storm. The areal reduction factor is obtained as a function of the storm duration and of its frequency. A weak but significant decrease of the areal reduction factor with respect to the return period is shown by the functions derived, and this result is consistent with that of some recent studies on this topic. The results derived, although preliminary, may find useful applications for the definition of the design storm in urban catchments larger than a few square kilometres and with durations of a few hours.
Improved Membership Probability for Moving Groups: Bayesian and Machine Learning Approaches
NASA Astrophysics Data System (ADS)
Lee, Jinhee; Song, Inseok
2018-01-01
Gravitationally unbound loose stellar associations (i.e., young nearby moving groups: moving groups hereafter) have been intensively explored because they are important in planet and disk formation studies, exoplanet imaging, and age calibration. Among the many efforts devoted to the search for moving group members, a Bayesian approach (e.g., using the code BANYAN) has become popular recently because of the many advantages it offers. However, the resultant membership probability needs to be carefully adopted because of its sensitive dependence on input models. In this study, we have developed an improved membership calculation tool focusing on the beta-Pic moving group. We made three improvements for building models used in BANYAN II: (1) updating a list of accepted members by re-assessing memberships in terms of position, motion, and age, (2) investigating member distribution functions in XYZ, and (3) exploring field star distribution functions in XYZUVW. Our improved tool can change membership probability by up to 70%. Membership probability is critical and must be better defined. For example, our code identifies only one third of the candidate members in SIMBAD that are believed to be kinematically associated with the beta-Pic moving group. Additionally, we performed a cluster analysis of young nearby stars using an unsupervised machine learning approach. As more moving groups and their members are identified, the complexity and ambiguity in the moving group configuration have increased. To clarify this issue, we analyzed ~4,000 X-ray bright young stellar candidates. Here, we present the preliminary results. By re-identifying moving groups with the least human intervention, we expect to understand the composition of the solar neighborhood. Moreover, better defined moving group membership will help us understand star formation and evolution in relatively low-density environments, especially for the low-mass stars which will be identified in the coming Gaia release.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anand, L. F. M.; Gudennavar, S. B., E-mail: shivappa.b.gudennavar@christuniversity.in; Bubbly, S. G.
The K to L shell total vacancy transfer probabilities of low Z elements Co, Ni, Cu, and Zn are estimated by measuring the Kβ to Kα intensity ratio adopting the 2π-geometry. The target elements were excited by 32.86 keV barium K-shell X-rays from a weak 137Cs γ-ray source. The emitted K-shell X-rays were detected using a low energy HPGe X-ray detector coupled to a 16k MCA. The measured intensity ratios and the total vacancy transfer probabilities are compared with theoretical results and others' work, establishing a good agreement.
Testing the anisotropy in the angular distribution of Fermi/GBM gamma-ray bursts
NASA Astrophysics Data System (ADS)
Tarnopolski, M.
2017-12-01
Gamma-ray bursts (GRBs) were confirmed to be of extragalactic origin due to their isotropic angular distribution, combined with the fact that they exhibited an intensity distribution that deviated strongly from the -3/2 power law. This finding was later confirmed with the first redshift, equal to at least z = 0.835, measured for GRB970508. Despite this result, the data from CGRO/BATSE and Swift/BAT indicate that long GRBs are indeed distributed isotropically, but the distribution of short GRBs is anisotropic. Fermi/GBM has detected 1669 GRBs to date, and their sky distribution is examined in this paper. A number of statistical tests are applied: nearest neighbour analysis, fractal dimension, dipole and quadrupole moments of the distribution function decomposed into spherical harmonics, the binomial test, and the two-point angular correlation function. Monte Carlo benchmark testing of each test is performed in order to evaluate its reliability. It is found that short GRBs are distributed anisotropically in the sky, and long ones have an isotropic distribution. The probability that these results are not a chance occurrence is equal to at least 99.98 per cent and 30.68 per cent for short and long GRBs, respectively. The cosmological context of this finding and its relation to large-scale structures is discussed.
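As an illustration of the harmonic-moment tests listed above, the following sketch (hypothetical Python, not the paper's code) estimates the dipole and quadrupole power of a set of sky positions by projecting the angular distribution onto low-order spherical harmonics, and benchmarks both statistics against isotropic Monte Carlo skies; the catalogue here is a random placeholder rather than the Fermi/GBM sample.

    import numpy as np
    from scipy.special import sph_harm

    rng = np.random.default_rng(0)

    def harmonic_power(theta, phi, ell):
        # Sum over m of |a_lm|^2 for degree ell, from equal-weight points on the sphere.
        # theta: azimuthal angle in [0, 2*pi), phi: polar angle in [0, pi].
        n = len(theta)
        return sum(abs(np.sum(sph_harm(m, ell, theta, phi)) / n) ** 2
                   for m in range(-ell, ell + 1))

    # Placeholder catalogue: isotropic positions standing in for the GRB coordinates.
    n_grb = 1669
    theta = rng.uniform(0.0, 2.0 * np.pi, n_grb)
    phi = np.arccos(rng.uniform(-1.0, 1.0, n_grb))    # uniform in cos(polar angle)

    dipole = harmonic_power(theta, phi, 1)
    quadrupole = harmonic_power(theta, phi, 2)

    # Monte Carlo benchmark: the same statistics under exact isotropy.
    mc = np.array([[harmonic_power(t, p, 1), harmonic_power(t, p, 2)]
                   for t, p in ((rng.uniform(0.0, 2.0 * np.pi, n_grb),
                                 np.arccos(rng.uniform(-1.0, 1.0, n_grb)))
                                for _ in range(500))])

    # Fraction of isotropic skies that are at least as anisotropic as the catalogue.
    print("dipole:", dipole, "p-value:", np.mean(mc[:, 0] >= dipole))
    print("quadrupole:", quadrupole, "p-value:", np.mean(mc[:, 1] >= quadrupole))

A genuinely anisotropic sample would give observed powers exceeding most of the isotropic realisations, i.e. small p-values.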
Coastal flooding hazard assessment on potentially vulnerable coastal sectors at Varna regional coast
NASA Astrophysics Data System (ADS)
Eftimova, Petya; Valchev, Nikolay; Andreeva, Nataliya
2017-04-01
Storm induced flooding is one of the most significant threats that coastal communities face. In light of climate change it is expected to gain even more importance. Therefore, adequate assessment of this hazard could improve the capacity to mitigate environmental, social, and economic impacts. The study was carried out within the Coastal Risk Assessment Framework (CRAF) developed within the FP7 RISC-KIT Project (Resilience-Increasing Strategies for Coasts - toolkit). The hazard assessment was applied to three potentially vulnerable coastal sectors located on the regional coast of Varna, Bulgarian Black Sea coast. The potential "hotspot" candidates were selected during the initial phase of CRAF, which evaluated the coastal risks at regional level. The area of interest comprises different coastal types - from natural beaches and rocky cliffs to man-modified environments presented by coastal and port defense structures such as the Varna Port breakwater, groynes, jetties and beaches formed by the presence of coastal structures. The assessment of coastal flooding was done using a combination of models - the XBeach model and the LISFLOOD inundation model applied consecutively. The XBeach model was employed to calculate the hazard intensities at the coast up to the berm crest, while the LISFLOOD model was used to calculate the intensity and extent of flooding in the hinterland. At the first stage, 75 extreme storm events were simulated using the XBeach model run in "non-hydrostatic" mode to obtain series of flood depth, depth-velocity and overtopping discharges at the predefined coastal cross-shore transects. Extreme value analysis was applied to the calculated hazard parameter series in order to determine their probability distribution functions. This is the so-called response approach, which focuses on the onshore impact rather than on the deep water boundary conditions. It allows calculation of the probability distribution of hazard extremes induced by a variety of combinations of waves and surges. The considered return periods were 20, 50 and 100 years. Subsequently, the overtopping volumes corresponding to the selected return periods were fed into the LISFLOOD model to calculate the intensity and extent of the hinterland flooding. For beaches with steeply rising slopes backed by cliffs, a combination of XBeach and LISFLOOD output was applied in order to properly map the flood depth and depth-velocity spatial distribution.
Composition effect in luminescence properties of Y(NbxTa1-x)O4 mixed crystals
NASA Astrophysics Data System (ADS)
Spassky, D.; Vasil'ev, A.; Vielhauer, S.; Sidletskiy, O.; Voloshyna, O.; Belsky, A.
2018-06-01
The luminescence properties of Y(NbxTa1-x)O4 mixed crystals were studied. Local structure modifications arise due to the inhomogeneous distribution of the substitutional ions in the mixed crystal and can be traced using luminescence spectroscopy. In particular, it is shown that the intensity of exciton emission under interband VUV excitation increases for intermediate values of x, with a maximum at x = 0.4, relative to the intensity observed in the end members of the mixed crystals, YTaO4 and YNbO4. From the luminescence excitation spectra it follows that the probability of exciton creation from separated e-h pairs also increases for intermediate values of x. Using numerical simulation it is shown that the effect is connected with the variation of the thermalization length of hot electrons and holes, which is minimal at x = 0.4.
New Insights into the Estimation of Extreme Geomagnetic Storm Occurrences
NASA Astrophysics Data System (ADS)
Ruffenach, Alexis; Winter, Hugo; Lavraud, Benoit; Bernardara, Pietro
2017-04-01
Space weather events such as intense geomagnetic storms are major disturbances of the near-Earth environment that may lead to serious impacts on our modern society. As such, it is of great importance to estimate their probability, and in particular that of extreme events. One approach largely used in statistical sciences for estimating the probability of extreme events is Extreme Value Analysis (EVA). Using this rigorous statistical framework, estimations of the occurrence of extreme geomagnetic storms are performed here based on the most relevant global parameters related to geomagnetic storms, such as ground parameters (e.g. the geomagnetic Dst and aa indices) and space parameters related to the characteristics of Coronal Mass Ejections (CMEs) (velocity, southward magnetic field component, electric field). Using our fitted model, we estimate the annual probability of a Carrington-type event (Dst = -850 nT) to be on the order of 10^-3, with a lower limit of the uncertainties on the return period of ~500 years. Our estimate is significantly higher than that of most past studies, which typically found a return period of a few hundred years at most. Thus, caution is required when extrapolating to intense values. Currently, the complexity of the processes and the length of available data inevitably lead to significant uncertainties in return period estimates for the occurrence of extreme geomagnetic storms. However, our application of extreme value models for extrapolating into the tail of the distribution provides a mathematically justified framework for the estimation of extreme return periods, thereby enabling the determination of more accurate estimates and reduced associated uncertainties.
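A minimal sketch of this kind of estimate is given below in Python, assuming a peaks-over-threshold analysis of -Dst with a Generalized Pareto tail; the storm record, threshold and resulting numbers are synthetic placeholders and do not reproduce the study's fitted model.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Placeholder record: excesses of -Dst (nT) above a storm threshold over 60 years;
    # a real analysis would use the observed Dst index series.
    years = 60.0
    threshold = 200.0                                  # nT, assumed storm threshold
    excesses = stats.genpareto.rvs(c=0.1, scale=60.0, size=45, random_state=rng)
    rate = len(excesses) / years                       # storm exceedances per year

    # Fit the Generalized Pareto Distribution to the excesses over the threshold.
    shape, _, scale = stats.genpareto.fit(excesses, floc=0.0)

    # Annual probability that -Dst exceeds a Carrington-like level (Dst = -850 nT).
    level = 850.0
    p_tail = stats.genpareto.sf(level - threshold, shape, loc=0.0, scale=scale)
    annual_prob = rate * p_tail
    print("annual probability:", annual_prob, "return period (years):", 1.0 / annual_prob)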
Attribution of Extreme Rainfall Events in the South of France Using EURO-CORDEX Simulations
NASA Astrophysics Data System (ADS)
Luu, L. N.; Vautard, R.; Yiou, P.
2017-12-01
The Mediterranean region regularly undergoes episodes of intense precipitation in the fall season that exceed 300 mm a day. This study focuses on the role of climate change in the dynamics of the events that occur in the South of France. We used an ensemble of 10 EURO-CORDEX model simulations with two horizontal resolutions (EUR-11: 0.11° and EUR-44: 0.44°) for the attribution of extreme fall rainfall in the Cévennes mountain range (South of France). The biases of the simulations were corrected with a simple scaling adjustment and a quantile correction (CDFt). This produces five datasets (EUR-44 and EUR-11, each with and without scaling adjustment, plus CDFt-EUR-11), on which we test the impact of resolution and bias correction on the extremes. Those datasets, after pooling all models together, are fitted by a stationary Generalized Extreme Value distribution for several periods to estimate a climate change signal in the tail of the distribution of extreme rainfall in the Cévennes region. Those changes are then interpreted with a scaling model that links extreme rainfall with mean and maximum daily temperature. The results show that higher-resolution simulations with bias adjustment provide a robust and confident increase of the intensity and likelihood of occurrence of autumn extreme rainfall in the area in the current climate in comparison with the historical climate. The exceedance probability of a 1-in-1000-year event in the historical climate may increase by a factor of 1.8 under the current climate, with a confidence interval of 0.4 to 5.3, according to the CDFt bias-adjusted EUR-11. The change in magnitude appears to follow the Clausius-Clapeyron relation, which indicates a 7% increase in rainfall per 1 °C increase in temperature.
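The sketch below illustrates, in Python with synthetic block maxima, the kind of stationary GEV comparison described above: fit the historical and current periods separately, read off the historical 1-in-1000-year level, and form the probability ratio under the current-climate fit. All data and parameter values are placeholders, not EURO-CORDEX output.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Placeholder autumn rainfall block maxima (mm/day) for a historical and a
    # current period; a real analysis would pool the bias-adjusted model ensemble.
    hist = stats.genextreme.rvs(c=-0.1, loc=180.0, scale=40.0, size=300, random_state=rng)
    curr = stats.genextreme.rvs(c=-0.1, loc=195.0, scale=44.0, size=300, random_state=rng)

    fit_hist = stats.genextreme.fit(hist)
    fit_curr = stats.genextreme.fit(curr)

    # 1-in-1000-year level in the historical climate ...
    x1000 = stats.genextreme.isf(1.0 / 1000.0, *fit_hist)

    # ... and the ratio of exceedance probabilities for that same level.
    p_hist = stats.genextreme.sf(x1000, *fit_hist)     # 1/1000 by construction
    p_curr = stats.genextreme.sf(x1000, *fit_curr)
    print("probability ratio (current / historical):", p_curr / p_hist)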
Gordaliza, P M; Muñoz-Barrutia, A; Via, L E; Sharpe, S; Desco, M; Vaquero, J J
2018-05-29
Computed tomography (CT) images enable capturing specific manifestations of tuberculosis (TB) that are undetectable using common diagnostic tests, which suffer from limited specificity. In this study, we aimed to automatically quantify the burden of Mycobacterium tuberculosis (Mtb) using biomarkers extracted from x-ray CT images. Nine macaques were aerosol-infected with Mtb and treated with various antibiotic cocktails. Chest CT scans were acquired in all animals at specific times independently of disease progression. First, a fully automatic segmentation of the healthy lungs from the acquired chest CT volumes was performed and air-like structures were extracted. Next, unsegmented pulmonary regions corresponding to damaged parenchymal tissue and TB lesions were included. CT biomarkers were extracted by classification of the probability distribution of the intensity of the segmented images into three tissue types: (1) healthy tissue, parenchyma free from infection; (2) soft diseased tissue; and (3) hard diseased tissue. The probability distribution of tissue intensities was assumed to follow a Gaussian mixture model. The thresholds identifying each region were automatically computed using an expectation-maximization algorithm. The estimated longitudinal course of TB infection shows that subjects that have followed the same antibiotic treatment present a similar response (relative change in the diseased volume) with respect to baseline. More interestingly, the correlation between the diseased volume (soft tissue + hard tissue), which was manually delineated by an expert, and the volume automatically extracted with the proposed method was very strong (R^2 ≈ 0.8). We present a methodology that is suitable for automatic extraction of a radiological biomarker from CT images for TB disease burden. The method could be used to describe the longitudinal evolution of Mtb infection in a clinical trial devoted to the design of new drugs.
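A minimal sketch of this classification step, assuming a three-component Gaussian mixture fitted by expectation-maximization with scikit-learn; the voxel intensities, intensity ranges and mixture parameters below are invented placeholders rather than the study's data.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)

    # Placeholder voxel intensities standing in for the segmented lung volume:
    # healthy parenchyma, soft diseased tissue and hard diseased tissue.
    voxels = np.concatenate([
        rng.normal(-820.0, 60.0, 50_000),    # healthy, air-filled parenchyma
        rng.normal(-350.0, 90.0, 8_000),     # soft diseased tissue
        rng.normal(40.0, 70.0, 2_000),       # hard diseased tissue
    ]).reshape(-1, 1)

    # Three-component Gaussian mixture fitted by expectation-maximization.
    gmm = GaussianMixture(n_components=3, random_state=0).fit(voxels)

    # Thresholds between adjacent tissue classes: intensities at which the most
    # probable (mean-ordered) component changes along a dense intensity grid.
    grid = np.linspace(voxels.min(), voxels.max(), 4000).reshape(-1, 1)
    rank = np.argsort(np.argsort(gmm.means_.ravel()))      # component -> rank by mean
    labels = rank[gmm.predict(grid)]
    thresholds = grid.ravel()[np.flatnonzero(np.diff(labels))]
    print("estimated tissue thresholds:", thresholds)

    # Disease-burden biomarker: fraction of voxels above the healthy/soft threshold.
    print("diseased volume fraction:", np.mean(voxels.ravel() > thresholds[0]))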
NASA Astrophysics Data System (ADS)
Ali Saif, M.; Gade, Prashant M.
2009-03-01
Pareto's law, which states that the wealth distribution in societies has a power-law tail, has been the subject of intensive investigations in the statistical physics community. Several models have been employed to explain this behavior. However, most of the agent-based models assume conservation of the number of agents and of wealth. Both these assumptions are unrealistic. In this paper, we study the limiting wealth distribution when one or both of these assumptions are not valid. Given the universality of the law, we have tried to study the wealth distribution from the asset exchange models' point of view. We consider models in which (a) new agents enter the market at a constant rate, (b) richer agents fragment with higher probability, introducing new agents into the system, and (c) both fragmentation and entry of new agents take place. While models (a) and (c) do not conserve total wealth or the number of agents, model (b) conserves total wealth. All these models lead to a power-law tail in the wealth distribution, pointing to the possibility that more generalized asset exchange models could help us explain the emergence of a power-law tail in the wealth distribution.
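As an illustration of case (a), the sketch below (Python, assumptions mine) runs a pairwise random-exchange economy into which new agents enter at a constant rate and then inspects the upper tail with a rank-size slope; the exchange rule, rates and run length are placeholders and not the authors' specific model.

    import numpy as np

    rng = np.random.default_rng(4)

    # Pairwise random exchange with a constant inflow of new agents (model (a)).
    wealth = list(rng.exponential(1.0, 100))     # initial agents
    entry_rate = 0.01                            # expected new agents per exchange step

    for _ in range(200_000):
        i, j = rng.integers(0, len(wealth), size=2)
        if i == j:
            continue
        # Random redivision of the pair's combined wealth (no saving propensity).
        eps = rng.random()
        total = wealth[i] + wealth[j]
        wealth[i], wealth[j] = eps * total, (1.0 - eps) * total
        # A new agent enters the market with unit wealth at a constant rate.
        if rng.random() < entry_rate:
            wealth.append(1.0)

    wealth = np.array(wealth)

    # Inspect the upper tail: the rank-size (Zipf) slope is roughly -1/alpha
    # for a Pareto tail with exponent alpha.
    tail = np.sort(wealth)[-500:][::-1]
    ranks = np.arange(1, tail.size + 1)
    slope = np.polyfit(np.log(ranks), np.log(tail), 1)[0]
    print("agents:", wealth.size, "rank-size slope of the tail:", slope)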
NASA Astrophysics Data System (ADS)
Peng, Chi; Wang, Meie; Chen, Weiping
2016-11-01
Spatial statistical methods including Cokriging interpolation, Moran's I analysis, and geographically weighted regression (GWR) were used to study the spatial characteristics of polycyclic aromatic hydrocarbon (PAH) accumulation in urban, suburban, and rural soils of Beijing. The concentrations of PAHs decreased spatially as the level of urbanization decreased. Generally, PAHs in soil showed two spatial patterns on the regional scale: (1) regional baseline deposition with a radius of 16.5 km related to the level of urbanization, and (2) isolated pockets of PAH-contaminated soil found up to around 3.5 km from industrial point sources. In the urban areas, soil PAHs showed high spatial heterogeneity on the block scale, which was probably related to vegetation cover, land use, and physical soil disturbance. The distribution of total PAHs in urban blocks was unrelated to the indicators of the intensity of anthropogenic activity, namely population density, light intensity at night, and road density, but was significantly related to the same indicators in the suburban and rural areas. The moving averages of molecular ratios suggested that PAHs in the suburban and rural soils were a mix of local emissions and diffusion from urban areas.
Ogunnaike, Babatunde A; Gelmi, Claudio A; Edwards, Jeremy S
2010-05-21
Gene expression studies generate large quantities of data with the defining characteristic that the number of genes (whose expression profiles are to be determined) exceeds the number of available replicates by several orders of magnitude. Standard spot-by-spot analysis still seeks to extract useful information for each gene on the basis of the number of available replicates, and thus plays to the weakness of microarrays. On the other hand, because of the data volume, treating the entire data set as an ensemble, and developing theoretical distributions for these ensembles, provides a framework that plays instead to the strength of microarrays. We present theoretical results showing that, under reasonable assumptions, the distribution of microarray intensities follows the Gamma model, with the biological interpretations of the model parameters emerging naturally. We subsequently establish that for each microarray data set, the fractional intensities can be represented as a mixture of Beta densities, and develop a procedure for using these results to draw statistical inference regarding differential gene expression. We illustrate the results with experimental data from gene expression studies on Deinococcus radiodurans following DNA damage using cDNA microarrays. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
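A rough sketch of this ensemble approach, using scipy's Gamma and Beta fitting routines as stand-ins for the paper's estimation procedure; the two-channel intensities and the tail cutoffs below are simulated placeholders, not D. radiodurans data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Placeholder spot intensities for two channels of one cDNA microarray; the
    # second channel carries a modest global shift to mimic differential expression.
    channel1 = stats.gamma.rvs(a=2.5, scale=400.0, size=10_000, random_state=rng)
    channel2 = stats.gamma.rvs(a=2.5, scale=440.0, size=10_000, random_state=rng)

    # Gamma model for the intensity ensemble (shape and scale, location fixed at 0).
    shape, _, scale = stats.gamma.fit(channel1, floc=0.0)

    # Fractional intensities and their Beta representation; genes falling in the
    # tails of this distribution are candidates for differential expression.
    fractional = channel1 / (channel1 + channel2)
    a_hat, b_hat, _, _ = stats.beta.fit(fractional, floc=0.0, fscale=1.0)
    cutoffs = stats.beta.ppf([0.005, 0.995], a_hat, b_hat)

    print("Gamma shape, scale:", shape, scale)
    print("Beta parameters:", a_hat, b_hat, "flag genes outside:", cutoffs)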
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan
2004-01-01
Lightning strike location and peak current are monitored operationally in the Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) area by the Cloud to Ground Lightning Surveillance System (CGLSS). The present study compiles ten years of CGLSS data into a climatological database of all strikes recorded within a 20-mile radius of space shuttle launch platform LP39A, which serves as a convenient central point. The period of record (POR) for the database runs from January 1, 1993 to December 31, 2002. Histograms and cumulative probability curves are produced to determine the distribution of occurrence rates for the spectrum of strike intensities (given in kA). Further analysis of the database provides a description of both seasonal and interannual variations in the lightning distribution.
Nonadditive entropies yield probability distributions with biases not warranted by the data.
Pressé, Steve; Ghosh, Kingshuk; Lee, Julian; Dill, Ken A
2013-11-01
Different quantities that go by the name of entropy are used in variational principles to infer probability distributions from limited data. Shore and Johnson showed that maximizing the Boltzmann-Gibbs form of the entropy ensures that the inferred probability distributions satisfy the multiplication rule of probability for independent events in the absence of data coupling such events. Other types of entropies, including nonadditive entropies such as the Tsallis entropy, violate the Shore and Johnson axioms and hence this basic consistency requirement. Here we use the axiomatic framework of Shore and Johnson to show how such nonadditive entropy functions generate biases in probability distributions that are not warranted by the underlying data.
Glatz, Andreas; Valdés Hernández, Maria C.; Kiker, Alexander J.; Bastin, Mark E.; Deary, Ian J.; Wardlaw, Joanna M.
2013-01-01
Multifocal T2*-weighted (T2*w) hypointensities in the basal ganglia, which are believed to arise predominantly from mineralized small vessels and perivascular spaces, have been proposed as a biomarker for cerebral small vessel disease. This study provides baseline data on their appearance on conventional structural MRI for improving and automating current manual segmentation methods. Using a published thresholding method, multifocal T2*w hypointensities were manually segmented from whole brain T2*w volumes acquired from 98 community-dwelling subjects in their early 70s. Connected component analysis was used to derive the average T2*w hypointensity count and load per basal ganglia nucleus, as well as the morphology of their connected components, while nonlinear spatial probability mapping yielded their spatial distribution. T1-weighted (T1w), T2-weighted (T2w) and T2*w intensity distributions of basal ganglia T2*w hypointensities and their appearance on T1w and T2w MRI were investigated to gain further insights into the underlying tissue composition. In 75/98 subjects, on average, 3 T2*w hypointensities with a median total volume per intracranial volume of 50.3 ppm were located in and around the globus pallidus. Individual hypointensities appeared smooth and spherical with a median volume of 12 mm3 and median in-plane area of 4 mm2. Spatial probability maps suggested an association between T2*w hypointensities and the point of entry of lenticulostriate arterioles into the brain parenchyma. T1w and T2w and especially the T2*w intensity distributions of these hypointensities, which were negatively skewed, were generally not normally distributed indicating an underlying inhomogeneous tissue structure. Globus pallidus T2*w hypointensities tended to appear hypo- and isointense on T1w and T2w MRI, whereas those from other structures appeared iso- and hypointense. This pattern could be explained by an increased mineralization of the globus pallidus. In conclusion, the characteristic spatial distribution and appearance of multifocal basal ganglia T2*w hypointensities in our elderly cohort on structural MRI appear to support the suggested association with mineralized proximal lenticulostriate arterioles and perivascular spaces. PMID:23769704
ProbOnto: ontology and knowledge base of probability distributions.
Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala
2016-09-01
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org. Contact: mjswat@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta
2017-02-15
Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.
Electron-phonon relaxation and excited electron distribution in gallium nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukov, V. P.; Donostia International Physics Center; Tyuterev, V. G., E-mail: valtyut00@mail.ru
2016-08-28
We develop a theory of energy relaxation in semiconductors and insulators highly excited by long-acting external irradiation. We derive the equation for the non-equilibrium distribution function of excited electrons. The solution for this function breaks up into the sum of two contributions. The low-energy contribution is concentrated in a narrow range near the bottom of the conduction band. It has the typical form of a Fermi distribution with an effective temperature and chemical potential. The effective temperature and chemical potential in this low-energy term are determined by the intensity of carriers' generation, the speed of electron-phonon relaxation, rates of inter-band recombination, and electron capture on the defects. In addition, there is a substantial high-energy correction. This high-energy “tail” largely covers the conduction band. The shape of the high-energy “tail” strongly depends on the rate of electron-phonon relaxation but does not depend on the rates of recombination and trapping. We apply the theory to the calculation of a non-equilibrium distribution of electrons in irradiated GaN. Probabilities of optical excitations from the valence to conduction band and electron-phonon coupling probabilities in GaN were calculated by density functional perturbation theory. Our calculation of both parts of the distribution function in gallium nitride shows that when the speed of the electron-phonon scattering is comparable with the rate of recombination and trapping, the contribution of the non-Fermi “tail” is comparable with that of the low-energy Fermi-like component. So the high-energy contribution can essentially affect the charge transport in irradiated and highly doped semiconductors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
In this study we examined and compared three different probability distribution methods to determine the most suitable model for the probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue covering the period 1900-2015 for magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution method was more suitable than the other distribution methods in this region.
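The following sketch, in Python rather than the Easyfit/Matlab tools named above, shows one way such a comparison can be set up: fit two-parameter Weibull, three-parameter Weibull and Frechet models to a sample, score them with the K-S test, and evaluate a conditional occurrence probability for a given elapsed time. The catalogue values are synthetic placeholders, and treating the sample as inter-event times of M ≥ 6.0 earthquakes is my assumption.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    # Placeholder inter-event times (years) between large earthquakes.
    times = stats.weibull_min.rvs(c=1.3, scale=4.0, size=80, random_state=rng)

    candidates = {
        "Weibull (2-parameter)": (stats.weibull_min, dict(floc=0.0)),
        "Weibull (3-parameter)": (stats.weibull_min, dict()),
        "Frechet":               (stats.invweibull, dict(floc=0.0)),
    }

    elapsed, horizon = 5.0, 10.0
    for name, (dist, fit_kwargs) in candidates.items():
        params = dist.fit(times, **fit_kwargs)
        ks_stat, p_value = stats.kstest(times, dist.cdf, args=params)
        # Conditional probability of an event within the next 10 years,
        # given 5 years have already elapsed since the last one.
        cond = (dist.cdf(elapsed + horizon, *params)
                - dist.cdf(elapsed, *params)) / dist.sf(elapsed, *params)
        print(f"{name}: K-S={ks_stat:.3f}, p={p_value:.3f}, "
              f"P(event in 10 yr | 5 yr elapsed)={cond:.3f}")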
Scan statistics with local vote for target detection in distributed system
NASA Astrophysics Data System (ADS)
Luo, Junhai; Wu, Qi
2017-12-01
Target detection occupies a pivotal position in distributed systems. Scan statistics, as one of the most efficient detection methods, has been applied to a variety of anomaly detection problems and significantly improves the probability of detection. However, scan statistics cannot achieve the expected performance when the noise intensity is strong or the signal emitted by the target is weak. The local vote algorithm can also achieve a higher target detection rate. After the local vote, the counting rule is always adopted for decision fusion. The counting rule does not use information about the contiguity of sensors but takes all sensors' data into consideration, which makes the result undesirable. In this paper, we propose a scan statistics with local vote (SSLV) method. This method combines scan statistics with the local vote decision. Before the scan statistics, each sensor executes a local vote decision according to its neighbors' data and its own. By combining the advantages of both, our method can obtain a higher detection rate in low signal-to-noise ratio environments than scan statistics alone. After the local vote decision, the distribution of sensors which have detected the target becomes more concentrated. To make full use of the local vote decision, we introduce a variable step parameter for the SSLV. It significantly shortens the scan period, especially when the target is absent. Analysis and simulations are presented to demonstrate the performance of our method.
Modelling stock order flows with non-homogeneous intensities from high-frequency data
NASA Astrophysics Data System (ADS)
Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.
2013-10-01
A micro-scale model is proposed for the evolution of such an information system as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking into account the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and, hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
Application of a time probabilistic approach to seismic landslide hazard estimates in Iran
NASA Astrophysics Data System (ADS)
Rajabi, A. M.; Del Gaudio, V.; Capolongo, D.; Khamehchiyan, M.; Mahdavifar, M. R.
2009-04-01
Iran is a country located in a tectonically active belt and is prone to earthquakes and related phenomena. In recent years, several earthquakes have caused many fatalities and much damage to facilities, e.g. the Manjil (1990), Avaj (2002), Bam (2003) and Firuzabad-e-Kojur (2004) earthquakes. These earthquakes generated many landslides. For instance, catastrophic landslides triggered by the Manjil Earthquake (Ms = 7.7) in 1990 buried the village of Fatalak, killed more than 130 people and cut many important roads and other lifelines, resulting in major economic disruption. In general, earthquakes in Iran have been concentrated in two major zones with different seismicity characteristics: one is the region of Alborz and Central Iran and the other is the Zagros Orogenic Belt. Understanding where seismically induced landslides are most likely to occur is crucial in reducing property damage and loss of life in future earthquakes. For this purpose a time probabilistic approach for earthquake-induced landslide hazard at regional scale, proposed by Del Gaudio et al. (2003), has been applied to the whole Iranian territory to provide the basis of hazard estimates. This method consists of evaluating the recurrence of seismically induced slope failure conditions inferred from Newmark's model. First, by adopting Arias intensity to quantify seismic shaking and using different Arias attenuation relations for the Alborz - Central Iran and Zagros regions, well-established methods of seismic hazard assessment, based on the Cornell (1968) method, were employed to obtain the occurrence probabilities for different levels of seismic shaking in a time interval of interest (50 years). Then, following Jibson (1998), empirical formulae specifically developed for Alborz - Central Iran and Zagros were used to represent, according to Newmark's model, the relation linking Newmark's displacement Dn to Arias intensity Ia and to slope critical acceleration ac. These formulae were employed to evaluate the slope critical acceleration (Ac)x for which a prefixed probability exists that seismic shaking would result in a Dn value equal to a threshold x whose exceedance would cause landslide triggering. The obtained ac values represent the minimum slope resistance required to keep the probability of seismic-landslide triggering within the prefixed value. In particular, we calculated the spatial distribution of (Ac)x for x thresholds of 10 and 2 cm in order to represent triggering conditions for coherent slides (e.g., slumps, block slides, slow earth flows) and disrupted slides (e.g., rock falls, rock slides, rock avalanches), respectively. Then we produced a probabilistic national map that shows the spatial distribution of (Ac)10 and (Ac)2, for a 10% probability of exceedance in 50 years, which is a significant level of hazard equal to that commonly used for building codes. The spatial distribution of the calculated (Ac)x values can be compared with the in situ actual ac values of specific slopes to estimate whether these slopes have a significant probability of failing under seismic action in the future. As an example of a possible application of this kind of time probabilistic map to hazard estimates, we compared the values obtained for the Manjil region with a GIS map providing the spatial distribution of estimated ac values in the same region. The spatial distribution of slopes characterized by ac < (Ac)10 was then compared with the spatial distribution of the major landslides of coherent type triggered by the Manjil earthquake.
This comparison provides indications on the potential, problems and limits of the tested approach for the study area. References: Cornell, C.A., 1968: Engineering seismic risk analysis, Bull. Seism. Soc. Am., 58, 1583-1606. Del Gaudio, V., Wasowski, J., & Pierri, P., 2003: An approach to time-probabilistic evaluation of seismically induced landslide hazard, Bull. Seism. Soc. Am., 93, 557-569. Jibson, R.W., E.L. Harp and J.A. Michael, 1998: A method for producing digital probabilistic seismic landslide hazard maps: an example from the Los Angeles, California, area, U.S. Geological Survey Open-File Report 98-113, Golden, Colorado, 17 pp.
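To make the Newmark step above concrete, here is a brief Python sketch of a Jibson-type regression of the form log10(Dn) = a*log10(Ia) + b*log10(ac) + c and its inversion for the critical acceleration (Ac)x at a given shaking level; the coefficients and the example Arias intensity are illustrative placeholders, not the region-specific formulae developed in the study.

    import numpy as np

    # Jibson-type regression: log10(Dn) = A*log10(Ia) + B*log10(ac) + C,
    # with Dn in cm, Ia in m/s and ac in g. Placeholder coefficients, not the
    # Alborz - Central Iran or Zagros formulae of the study.
    A, B, C = 1.521, -1.993, -1.546

    def newmark_displacement(ia_ms, ac_g):
        # Predicted Newmark displacement (cm) for a given Arias intensity and
        # slope critical acceleration.
        return 10.0 ** (A * np.log10(ia_ms) + B * np.log10(ac_g) + C)

    def critical_acceleration(ia_ms, dn_threshold_cm):
        # (Ac)_x: the critical acceleration at which the predicted displacement
        # equals the triggering threshold x, for the given shaking level Ia.
        log_ac = (np.log10(dn_threshold_cm) - C - A * np.log10(ia_ms)) / B
        return 10.0 ** log_ac

    # Example: shaking level with a 10% exceedance probability in 50 years,
    # here taken as Ia = 1.2 m/s for illustration.
    ia = 1.2
    for x in (10.0, 2.0):   # coherent slides (10 cm) and disrupted slides (2 cm)
        print(f"(Ac)_{x:g} =", critical_acceleration(ia, x), "g")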
Anomalous, non-Gaussian tracer diffusion in crowded two-dimensional environments
NASA Astrophysics Data System (ADS)
Ghosh, Surya K.; Cherstvy, Andrey G.; Grebenkov, Denis S.; Metzler, Ralf
2016-01-01
A topic of intense current investigation is the question of how the highly crowded environment of biological cells affects the dynamic properties of passively diffusing particles. Motivated by recent experiments, we report results of extensive simulations of the motion of a finite-sized tracer particle in a heterogeneously crowded environment made up of quenched distributions of monodisperse crowders of varying sizes in finite circular two-dimensional domains. For given spatial distributions of monodisperse crowders we demonstrate how anomalous diffusion with strongly non-Gaussian features arises in this model system. We investigate both biologically relevant situations of particles released either at the surface of an inner domain or at the outer boundary, exhibiting distinctly different features of the observed anomalous diffusion for heterogeneous distributions of crowders. Specifically, we reveal an asymmetric spreading of tracers even at moderate crowding. In addition to the mean squared displacement (MSD) and the local diffusion exponent, we investigate the magnitude and the amplitude scatter of the time-averaged MSD of individual tracer trajectories, the non-Gaussianity parameter, and the van Hove correlation function. We also quantify how the average tracer diffusivity varies with the position in the domain with a heterogeneous radial distribution of crowders and examine the behaviour of the survival probability and the dynamics of the tracer survival probability. Inter alia, the systems we investigate are related to the passive transport of lipid molecules and proteins in two-dimensional crowded membranes or the motion in colloidal solutions or emulsions in effectively two-dimensional geometries, as well as inside supercrowded, surface-adhered cells.
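The observables named above are straightforward to compute from trajectories; the sketch below (Python, with ordinary Brownian placeholder trajectories rather than the crowded-domain simulations) evaluates the time-averaged MSD, the local diffusion exponent and the two-dimensional non-Gaussianity parameter.

    import numpy as np

    rng = np.random.default_rng(7)

    # Placeholder trajectories: ordinary 2D Brownian motion; the crowded system
    # would instead show anomalous scaling and non-Gaussian displacements.
    n_traj, n_steps, dt = 200, 2000, 1.0
    traj = np.cumsum(rng.normal(0.0, 1.0, size=(n_traj, n_steps, 2)), axis=1)

    lags = np.unique(np.logspace(0, 3, 30).astype(int))

    def tamsd(x, lag):
        # Time-averaged MSD of a single trajectory at a given lag.
        disp = x[lag:] - x[:-lag]
        return np.mean(np.sum(disp ** 2, axis=1))

    ta_msd = np.array([[tamsd(t, lag) for lag in lags] for t in traj])
    ea_msd = ta_msd.mean(axis=0)                 # ensemble average of the TA-MSD
    scatter = ta_msd.std(axis=0) / ea_msd        # amplitude scatter between tracers

    # Local diffusion exponent from the log-log slope of the averaged MSD.
    exponent = np.gradient(np.log(ea_msd), np.log(lags * dt))

    # Non-Gaussianity parameter for two-dimensional displacements at each lag.
    alpha2 = []
    for lag in lags:
        r2 = np.sum((traj[:, lag:, :] - traj[:, :-lag, :]) ** 2, axis=2).ravel()
        alpha2.append(np.mean(r2 ** 2) / (2.0 * np.mean(r2) ** 2) - 1.0)

    print("exponent at the largest lags:", exponent[-1])
    print("amplitude scatter, non-Gaussianity at largest lag:", scatter[-1], alpha2[-1])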
Incorporating Skew into RMS Surface Roughness Probability Distribution
NASA Technical Reports Server (NTRS)
Stahl, Mark T.; Stahl, H. Philip.
2013-01-01
The standard treatment of RMS surface roughness data is the application of a Gaussian probability distribution. This handling of surface roughness ignores the skew present in the surface and overestimates the most probable RMS of the surface, the mode. Using experimental data, we confirm that the Gaussian distribution overestimates the mode and that application of an asymmetric distribution provides a better fit. Implementing the proposed asymmetric distribution in the optical manufacturing process would reduce the polishing time required to meet surface roughness specifications.
NASA Technical Reports Server (NTRS)
Klein, L.
1972-01-01
Emission and absorption spectra of water vapor plasmas generated in a wall-stabilized arc at atmospheric pressure and 4 current, and at 0.03 atm and 15 to 50 A, were measured at high spatial and spectral resolution. The gas temperature was determined from the shape of Doppler-broadened rotational lines of OH. The observed nonequilibrium population distributions over the energy levels of atoms are interpreted in terms of a theoretical state model for diffusion-controlled arc plasmas. Excellent correlation is achieved between measured and predicted occupation of hydrogen energy levels. It is shown that the population distribution over the nonpredissociating rotational-vibrational levels of the A 2 Sigma state of OH is close to an equilibrium distribution at the gas temperature, although the total density of this state is much higher than its equilibrium density. The reduced intensities of the rotational lines originating in these levels yielded Boltzmann plots that were strictly linear.
NASA Astrophysics Data System (ADS)
Wilkinson, S. J.; Hukins, D. W. L.
1999-08-01
Elastic scattering of X-rays can provide the following information on the fibrous protein collagen: its molecular structure, the axial arrangement of rod-like collagen molecules in a fibril, the lateral arrangement of molecules within a fibril, and the orientation of fibrils within a biological tissue. The first part of the paper reviews the principles involved in deducing this information. The second part describes a new computer program for measuring the equatorial intensity distribution, which provides information on the lateral arrangement of molecules within a fibril, and the angular distribution of the equatorial peaks, which provides information on the orientation of fibrils. The orientation of fibrils within a tissue is quantified by the orientation distribution function, g(φ), which represents the probability of finding a fibril oriented between φ and φ + δφ. The application of the program is illustrated by measurement of g(φ) for the collagen fibrils in demineralised cortical bone from cow tibia.
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
NASA Astrophysics Data System (ADS)
Kagoshima, Yasushi; Miyagawa, Takamasa; Kagawa, Saki; Takeda, Shingo; Takano, Hidekazu
2017-08-01
The intensity distribution in phase space of an X-ray synchrotron radiation beamline was measured using a pinhole camera method, in order to verify astigmatism compensation by a Fresnel zone plate focusing optical system. The beamline is equipped with a silicon double crystal monochromator. The beam size and divergence at an arbitrary distance were estimated. It was found that the virtual source point was largely different between the vertical and horizontal directions, which is probably caused by thermal distortion of the monochromator crystal. The result is consistent with our astigmatism compensation by inclining a Fresnel zone plate.
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
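A short numerical sketch of the single-integral idea, assuming a slip occurs whenever the required friction exceeds the available friction, so that P(slip) = ∫ f_avail(u) P(required > u) du; the trapezoidal rule does the integration, and the two (deliberately non-normal) distributions below are placeholders, not the paper's fitted ones.

    import numpy as np
    from scipy import stats

    # Available and required friction coefficients; both chosen non-normal on
    # purpose, since the study stresses normality cannot be assumed.
    available = stats.lognorm(s=0.25, scale=0.50)
    required = stats.weibull_min(c=4.0, scale=0.22)

    # Single-integral form evaluated with the trapezoidal rule:
    #   P(slip) = integral of f_available(u) * P(required > u) du.
    u = np.linspace(0.0, 1.5, 20_001)
    p_slip = np.trapz(available.pdf(u) * required.sf(u), u)

    # Monte Carlo check of the same probability.
    rng = np.random.default_rng(8)
    mc = np.mean(required.rvs(1_000_000, random_state=rng)
                 > available.rvs(1_000_000, random_state=rng))
    print("trapezoidal:", p_slip, "Monte Carlo:", mc)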
Integrated-Circuit Pseudorandom-Number Generator
NASA Technical Reports Server (NTRS)
Steelman, James E.; Beasley, Jeff; Aragon, Michael; Ramirez, Francisco; Summers, Kenneth L.; Knoebel, Arthur
1992-01-01
Integrated circuit produces 8-bit pseudorandom numbers from specified probability distribution, at rate of 10 MHz. Use of Boolean logic, circuit implements pseudorandom-number-generating algorithm. Circuit includes eight 12-bit pseudorandom-number generators, outputs are uniformly distributed. 8-bit pseudorandom numbers satisfying specified nonuniform probability distribution are generated by processing uniformly distributed outputs of eight 12-bit pseudorandom-number generators through "pipeline" of D flip-flops, comparators, and memories implementing conditional probabilities on zeros and ones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kok, H. Petra, E-mail: H.P.Kok@amc.uva.nl; Crezee, Johannes; Franken, Nicolaas A.P.
2014-03-01
Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy^-1) and β (Gy^-2) of the linear-quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear-quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from different treatment modalities.
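A compact sketch of the voxel-wise conversion described above, assuming the per-fraction biological effect is matched with the linear-quadratic model using a linearly temperature-enhanced α and an unchanged β; the α, β, enhancement slope and fractionation below are placeholders, not the literature values used in the study.

    import numpy as np

    ALPHA37, BETA37 = 0.15, 0.05     # Gy^-1, Gy^-2 at 37 degC (assumed values)
    N_FRACTIONS = 35                 # assumed fractionation

    def alpha(temp_c, slope=0.02):
        # Assumed linear thermal enhancement of alpha above 37 degC.
        return ALPHA37 + slope * np.maximum(temp_c - 37.0, 0.0)

    def equivalent_dose(dose_gy, temp_c):
        # Match the per-fraction effect with hyperthermia,
        #   alpha37*d_eq + beta37*d_eq**2 = alpha(T)*d + beta37*d**2,
        # and return the equivalent total dose without hyperthermia.
        d = dose_gy / N_FRACTIONS
        effect = alpha(temp_c) * d + BETA37 * d ** 2
        d_eq = (-ALPHA37 + np.sqrt(ALPHA37 ** 2 + 4.0 * BETA37 * effect)) / (2.0 * BETA37)
        return N_FRACTIONS * d_eq

    # Example voxels: planned dose (Gy) and planned temperature (degC).
    dose = np.array([62.9, 76.0, 81.0])
    temp = np.array([40.5, 41.6, 42.4])
    print(equivalent_dose(dose, temp))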
NASA Astrophysics Data System (ADS)
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most such studies of high-flow extremes concern basins in the humid and semi-humid south and east of China, while for the inland river basins, which occupy about 35% of the country's area, such studies are scarce, partly due to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high-flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peaks-over-threshold (POT) method and the Generalized Pareto Distribution (GPD), in which the selection of the threshold and the inherent assumptions for POT series are elaborated in detail. For comparison, other widely used probability distributions including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, stability features of model parameters, the return level plot and the inherent independence assumption of POT series, an optimum threshold of 340 m3/s is finally determined for high-flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high-flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain. The frequency of high-flow extremes exhibits a very slight but not significant decreasing trend from 1978 to 2008, while the intensity of such flow extremes is comparatively increasing, especially for the higher return levels.
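The sketch below, with a synthetic daily flow record in place of the Yingluoxia data, illustrates the main POT steps named above: a mean-excess check across candidate thresholds, a GPD fit to the excesses above the selected threshold, and a return-level estimate.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)

    # Placeholder daily flows (m3/s) standing in for the 1978-2008 record.
    flows = stats.gamma.rvs(a=2.0, scale=60.0, size=31 * 365, random_state=rng)

    # Mean-excess diagnostic: approximate linearity above a threshold supports the GPD.
    for u in (250.0, 300.0, 340.0, 380.0):
        exc = flows[flows > u] - u
        print(f"threshold {u:.0f}: n={exc.size}, mean excess={exc.mean():.1f}")

    # GPD fit above the selected threshold and a T-year return level.
    u = 340.0
    exc = flows[flows > u] - u
    shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
    rate = exc.size / 31.0                      # exceedances per year
    T = 100.0
    return_level = u + stats.genpareto.ppf(1.0 - 1.0 / (rate * T), shape, scale=scale)
    print("estimated 100-year high flow (m3/s):", return_level)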
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
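A small sketch of the quantities such a program computes, with scipy's multivariate normal standing in for the original routines: a rectangular probability by inclusion-exclusion on the joint CDF and a conditional probability from the conditional normal; the means, covariance and limits are arbitrary examples.

    import numpy as np
    from scipy import stats

    mean = np.array([0.0, 0.0])
    cov = np.array([[1.0, 0.6],
                    [0.6, 1.0]])
    bvn = stats.multivariate_normal(mean, cov)

    # Rectangular probability P(a1 < X <= b1, a2 < Y <= b2) by inclusion-exclusion.
    a, b = np.array([-1.0, -0.5]), np.array([1.5, 2.0])
    rect = (bvn.cdf(b) - bvn.cdf([a[0], b[1]]) - bvn.cdf([b[0], a[1]]) + bvn.cdf(a))

    # Conditional probability P(Y <= y0 | X = x0) from the conditional normal.
    x0, y0 = 1.0, 0.5
    cond_mean = mean[1] + cov[0, 1] / cov[0, 0] * (x0 - mean[0])
    cond_sd = np.sqrt(cov[1, 1] - cov[0, 1] ** 2 / cov[0, 0])
    cond = stats.norm.cdf(y0, loc=cond_mean, scale=cond_sd)

    print("rectangular probability:", rect, "conditional probability:", cond)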
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
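Once per-source rates are in hand, the Poisson assumption makes aggregation across sources straightforward, as in the short sketch below; the rates and exposure time are purely illustrative and are not estimates for the Atlantic coast.

    import numpy as np

    # If source i produces tsunamis exceeding a coastal threshold with mean annual
    # rate lam_i and the sources act as independent Poisson processes, then
    #   P(at least one such tsunami in T years) = 1 - exp(-T * sum_i lam_i).
    rates = np.array([2e-4, 5e-4, 1e-4, 3e-5])   # per-source annual rates (assumed)
    T = 50.0                                     # exposure time in years

    total_rate = rates.sum()
    p_any = 1.0 - np.exp(-T * total_rate)
    print("aggregate annual rate:", total_rate, "P(>=1 event in 50 yr):", p_any)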
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
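The detection-curve idea above translates directly into a logistic model of detection probability as a function of sampling intensity and target density; the sketch below (Python, simulated trials, invented coefficients) builds such a curve and reads off the effort needed to keep the false-negative rate below a chosen level.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(10)

    # Simulated detection trials: target density (per m^2), search effort (minutes)
    # and whether the target was found. The "true" coefficients are invented.
    n = 400
    density = rng.uniform(0.05, 2.0, n)
    effort = rng.uniform(1.0, 30.0, n)
    logit_p = -1.0 + 1.0 * np.log(density) + 0.25 * effort
    detected = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))

    X = np.column_stack([np.log(density), effort])
    model = LogisticRegression().fit(X, detected)

    # Detection curve versus effort at a fixed low density, and the effort needed
    # to push the probability of a false negative below 5%.
    effort_grid = np.linspace(1.0, 60.0, 200)
    X_grid = np.column_stack([np.full_like(effort_grid, np.log(0.1)), effort_grid])
    p_detect = model.predict_proba(X_grid)[:, 1]
    if p_detect.max() >= 0.95:
        print("effort for 95% detection at density 0.1/m^2:",
              effort_grid[np.argmax(p_detect >= 0.95)], "minutes")
    else:
        print("95% detection not reached within the effort range considered")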
Positive phase space distributions and uncertainty relations
NASA Technical Reports Server (NTRS)
Kruger, Jan
1993-01-01
In contrast to a widespread belief, Wigner's theorem allows the construction of true joint probabilities in phase space for distributions describing the object system as well as for distributions depending on the measurement apparatus. The fundamental role of Heisenberg's uncertainty relations in Schroedinger form (including correlations) is pointed out for these two possible interpretations of joint probability distributions. Hence, in order that a multivariate normal probability distribution in phase space may correspond to a Wigner distribution of a pure or a mixed state, it is necessary and sufficient that Heisenberg's uncertainty relation in Schroedinger form should be satisfied.
Climatic variables are associated with the prevalence of biliary trematodes in otters.
Sherrard-Smith, Ellie; Chadwick, Elizabeth A; Cable, Joanne
2013-08-01
Parasites with complex life cycles are expected to be disproportionately affected by climate change. Knowledge of current associations with weather and host-parasite interactions is therefore essential for the inference of future distributions. The Eurasian otter, Lutra lutra, is exposed to a range of parasites due to its large home range and use of terrestrial, freshwater and marine habitats. As such, it can act as a sentinel species for generalist parasites. Here we consider two biliary parasites recently reported in the United Kingdom, Pseudamphistomum truncatum and Metorchis albidus (Trematoda, Opisthorchiidae), and ask whether there are associations between abiotic factors (season, temperature, rainfall and the North Atlantic Oscillation) and the prevalence and intensities of these parasites in otters (n = 586). To control for biotic interactions we first examined whether particular sub-groups of the otter population (grouped by sex, age-class and condition) are more prone to infection and whether any damage is associated with the presence of these parasites. Even though mean intensities of the smaller trematode, P. truncatum (28.3 worms/host), were much higher than M. albidus (4.1), both parasite species had similar impacts on the otter. The distributions of parasites on host sexes were similar, but males suffered greater damage and regardless of sex, parasite intensity increased in older hosts. The probability of infection with either parasite was negatively associated with ground frost, minimum temperatures and rainfall, but was positively associated with warm long-term average temperatures. Although it is widely accepted that multiple variables influence parasite distributions, to our knowledge this is one of only a few studies to examine the combined impact of biotic and abiotic variables on parasites with complex life cycles within their wild definitive host. Identifying such associations can give greater accuracy to predictions concerning the distribution and spread of trematodes with future climate change. Copyright © 2013 Australian Society for Parasitology Inc. Published by Elsevier Ltd. All rights reserved.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
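As a worked illustration of the scale invariance attributed to the reciprocal distribution above (the finite support [a, b] is assumed here only so the density normalizes):

$$
p(x)=\frac{1}{x\,\ln(b/a)},\quad a\le x\le b,
\qquad
p(x)\,dx=\frac{d(\ln x)}{\ln(b/a)},
$$

so the probability assigned to any logarithmic interval is unchanged under a rescaling x → λx (with the support rescaled accordingly), which is the defining property exploited in the construction above and in derivations of Benford's law.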
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picconi, David; Grebenshchikov, Sergy Yu., E-mail: Sergy.Grebenshchikov@ch.tum.de
Photodissociation of ozone in the near UV is studied quantum mechanically in two excited electronic states coupled at a conical intersection located outside the Franck-Condon zone. The calculations, performed using recent ab initio PESs, provide an accurate description of the photodissociation dynamics across the Hartley/Huggins absorption bands. The observed photofragment distributions are reproduced in the two electronic dissociation channels. The room temperature absorption spectrum, constructed as a Boltzmann average of many absorption spectra of rotationally excited parent ozone, agrees with experiment in terms of widths and intensities of diffuse structures. The exit channel conical intersection contributes to the coherent broadening of the absorption spectrum and directly affects the product vibrational and translational distributions. The photon energy dependences of these distributions are strikingly different for fragments created along the adiabatic and the diabatic paths through the intersection. They can be used to reverse engineer the most probable geometry of the non-adiabatic transition. The angular distributions, quantified in terms of the anisotropy parameter β, are substantially different in the two channels due to a strong anticorrelation between β and the rotational angular momentum of the fragment O2.
Effect of Cisplatin on Parotid Gland Function in Concomitant Radiochemotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hey, Jeremias; Setz, Juergen; Gerlach, Reinhard
2009-12-01
Purpose: To determine the influence of concomitant radiochemotherapy with cisplatin on parotid gland tissue complication probability. Methods and Materials: Patients treated with either radiotherapy (n = 61) or concomitant radiochemotherapy with cisplatin (n = 36) for head-and-neck cancer were prospectively evaluated. The dose and volume distributions of the parotid glands were noted in dose-volume histograms. Stimulated salivary flow rates were measured before, during the 2nd and 6th weeks and at 4 weeks and 6 months after the treatment. The data were fit using the normal tissue complication probability model of Lyman. Complication was defined as a reduction of the salivary flow rate to less than 25% of the pretreatment flow rate. Results: The normal tissue complication probability model parameter TD50 (the dose leading to a complication probability of 50%) was found to be 32.2 Gy at 4 weeks and 32.1 Gy at 6 months for concomitant radiochemotherapy and 41.1 Gy at 4 weeks and 39.6 Gy at 6 months for radiotherapy. The tolerated dose for concomitant radiochemotherapy was at least 7 to 8 Gy lower than for radiotherapy alone at TD50. Conclusions: In this study, the concomitant radiochemotherapy tended to cause a higher probability of parotid gland tissue damage. Advanced radiotherapy planning approaches such as intensity-modulated radiotherapy may be particularly important for parotid sparing in radiochemotherapy because of cisplatin-related increased radiosensitivity of glands.
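For a uniformly irradiated organ (or after reduction to an equivalent uniform dose) the Lyman model quoted above reduces to a probit curve in dose. A minimal sketch, using the 4-week TD50 values reported in the abstract and an assumed slope parameter m (not reported here), showing how the lower TD50 translates into a higher complication probability at the same dose:

```python
# Sketch of the Lyman NTCP probit model: NTCP = Phi((D - TD50) / (m * TD50)).
# TD50 values are the 4-week estimates from the abstract; m = 0.45 is an illustrative placeholder.
from scipy.stats import norm

def lyman_ntcp(dose_gy, td50_gy, m=0.45):
    t = (dose_gy - td50_gy) / (m * td50_gy)
    return norm.cdf(t)

for label, td50 in [("radiochemotherapy", 32.2), ("radiotherapy alone", 41.1)]:
    print(label, "NTCP at a 35 Gy mean parotid dose ~", round(lyman_ntcp(35.0, td50), 2))
```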
A novel method for the evaluation of uncertainty in dose-volume histogram computation.
Henríquez, Francisco Cutanda; Castrillón, Silvia Vargas
2008-03-15
Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take these uncertainties into account: a probabilistic approach using a new kind of histogram, a dose-expected volume histogram. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Results show a greater effect on planning target volume coverage than in organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
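A minimal numerical sketch of the dose-expected volume idea: each voxel dose is given a rectangular (uniform) uncertainty of half-width delta, and the expected fractional volume above a dose threshold is the average per-voxel exceedance probability. The voxel doses and the half-width below are invented for illustration.

```python
import numpy as np

def expected_volume_fraction(voxel_doses, threshold, delta):
    """Average over voxels of P(true dose >= threshold), with dose uniform in [d - delta, d + delta]."""
    d = np.asarray(voxel_doses, dtype=float)
    p = np.clip((d + delta - threshold) / (2.0 * delta), 0.0, 1.0)
    return p.mean()

rng = np.random.default_rng(0)
doses = rng.normal(60.0, 3.0, size=10_000)   # illustrative computed voxel doses (Gy)
print(expected_volume_fraction(doses, threshold=58.0, delta=1.5))
```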
NASA Astrophysics Data System (ADS)
Bastida, F.; Brime, C.; García-López, S.; Sarmiento, G. N.
The palaeotemperature distribution in the transition from diagenesis to metamorphism in the western nappes of the Cantabrian Zone (Somiedo, La Sobia and Aramo Units) are analysed by conodont colour alteration index (CAI) and illite crystallinity (IC). Structural and stratigraphic control in distribution of CAI and IC values is observed. Both CAI and IC value distributions show that anchizonal conditions are reached in the lower part of the Somiedo Unit. A disruption of the thermal trend by basal thrusts is evidenced by CAI and IC values. There is an apparent discrepancy between the IC and CAI values in Carboniferous rocks of the Aramo Unit; the IC has mainly anchizonal values, whereas the CAI has diagenetic values. Discrepant IC values are explained as a feature inherited from the source area. In the Carboniferous rocks of the La Sobia Unit, both IC and CAI indicate diagenetic conditions. The anchimetamorphism predated completion of emplacement of the major nappes; it probably developed previously and/or during the early stages of motion of the units. Temperature probably decreased when the metamorphosed zones of the sheets rose along ramps and were intensely eroded. In the context of the Iberian Variscan belt, influence of tectonic factors on the metamorphism is greater in the internal parts, where the strain and cleavage are always present, than in the external parts (Cantabrian Zone), where brittle deformation and rock translation are dominant, with an increasing role of the burial on the metamorphism.
NASA Astrophysics Data System (ADS)
Li, Y.; Gong, H.; Zhu, L.; Guo, L.; Gao, M.; Zhou, C.
2016-12-01
Continuous over-exploitation of groundwater causes dramatic drawdown and leads to regional land subsidence in the Huairou Emergency Water Resources region, which is located in the up-middle part of the Chaobai river basin of Beijing. Owing to the spatial heterogeneity of the strata's lithofacies in the alluvial fan, ground deformation has no significant positive correlation with groundwater drawdown, and one of the challenges ahead is to quantify the spatial distribution of the strata's lithofacies. The transition probability geostatistics approach provides potential for characterizing the distribution of heterogeneous lithofacies in the subsurface. By combining the thickness of the clay layer extracted from the simulation with the deformation field acquired from PS-InSAR technology, the influence of the strata's lithofacies on land subsidence can be analyzed quantitatively. The strata's lithofacies derived from borehole data were generalized into four categories and their probability distribution in the observed space was mined by using transition probability geostatistics, of which clay was the predominant compressible material. Geologically plausible realizations of lithofacies distribution were produced, accounting for complex heterogeneity in the alluvial plain. At a particular probability level of more than 40 percent, the volume of clay defined was 55 percent of the total volume of the strata's lithofacies. This level, equaling nearly the volume of compressible clay derived from the geostatistics, was thus chosen to represent the boundary between compressible and uncompressible material. The method incorporates statistical geological information, such as distribution proportions, average lengths and juxtaposition tendencies of geological types, mainly derived from borehole data and expert knowledge, into the Markov chain model of transition probability. Some similarities of patterns were indicated between the spatial distribution of the deformation field and the clay layer. In areas with roughly similar water table decline, subsidence is greater at locations where the subsurface has a higher probability of containing compressible material than at locations with a lower probability. Such an estimate of the spatial probability distribution is useful for analyzing the uncertainty of land subsidence.
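The Markov chain model of transition probability mentioned above is usually written as a matrix exponential of a transition-rate matrix. A minimal sketch with an invented rate matrix for four generalized lithofacies categories (the values are placeholders, not the study's calibration):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative vertical transition-rate matrix R (units 1/m); rows sum to zero, and
# off-diagonal entries encode juxtaposition tendencies between facies categories.
R = np.array([[-0.20, 0.10, 0.05, 0.05],
              [ 0.08, -0.25, 0.12, 0.05],
              [ 0.04, 0.10, -0.18, 0.04],
              [ 0.05, 0.05, 0.05, -0.15]])

def transition_probabilities(lag_m):
    """T(h) = expm(R * h); entry (j, k) is Pr(facies k at x + h | facies j at x)."""
    return expm(R * lag_m)

print(transition_probabilities(5.0).round(3))   # each row sums to ~1
```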
The exact probability distribution of the rank product statistics for replicated experiments.
Eisinga, Rob; Breitling, Rainer; Heskes, Tom
2013-03-18
The rank product method is a widely accepted technique for detecting differentially regulated genes in replicated microarray experiments. To approximate the sampling distribution of the rank product statistic, the original publication proposed a permutation approach, whereas recently an alternative approximation based on the continuous gamma distribution was suggested. However, both approximations are imperfect for estimating small tail probabilities. In this paper we relate the rank product statistic to number theory and provide a derivation of its exact probability distribution and the true tail probabilities. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
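For orientation, a minimal sketch of the rank product statistic itself and of the permutation approximation that the exact result above replaces; the expression matrix is a random placeholder:

```python
import numpy as np

def rank_products(data):
    """data: genes x replicates; geometric mean of within-replicate ranks (rank 1 = smallest)."""
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    return np.exp(np.log(ranks).mean(axis=1))

rng = np.random.default_rng(1)
expr = rng.normal(size=(500, 4))      # 500 genes, 4 replicates (illustrative)
rp = rank_products(expr)

# Permutation null: shuffle each replicate column independently and pool the statistics.
null = np.concatenate([rank_products(rng.permuted(expr, axis=0)) for _ in range(100)])
p_gene0 = (null <= rp[0]).mean()      # left-tail p-value for the first gene
print(rp[0], p_gene0)
```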
Ye, Tao; Wang, Yao; Guo, Zhixing; Li, Yijia
2017-01-01
The contribution of factors including fuel type, fire-weather conditions, topography and human activity to fire regime attributes (e.g. fire occurrence, size distribution and severity) has been intensively discussed. The relative importance of those factors in explaining the burn probability (BP), which is critical in terms of fire risk management, has been insufficiently addressed. Focusing on a subtropical coniferous forest with strong human disturbance in East China, our main objective was to evaluate and compare the relative importance of fuel composition, topography, and human activity for fire occurrence, size and BP. Local BP distribution was derived with stochastic fire simulation approach using detailed historical fire data (1990-2010) and forest-resource survey results, based on which our factor contribution analysis was carried out. Our results indicated that fuel composition had the greatest relative importance in explaining fire occurrence and size, but human activity explained most of the variance in BP. This implies that the influence of human activity is amplified through the process of overlapping repeated ignition and spreading events. This result emphasizes the status of strong human disturbance in local fire processes. It further confirms the need for a holistic perspective on factor contribution to fire likelihood, rather than focusing on individual fire regime attributes, for the purpose of fire risk management.
Van Wijk, Eduard P A; Van Wijk, Roeland; Bajpai, Rajendra P
2008-05-01
Research on human ultra-weak photon emission (UPE) has suggested a typical human emission anatomic percentage distribution pattern. It was demonstrated that emission intensities are lower in long-term practitioners of meditation as compared to control subjects. The percent contribution of emission from different anatomic locations was not significantly different for meditation practitioners and control subjects. Recently, a procedure was developed to analyze the fluctuations in the signals by measuring probabilities of detecting different numbers of photons in a bin and correct these for background noise. The procedure was tested utilizing the signal from three different body locations of a single subject, demonstrating that probabilities have non-classical features and are well described by the signal in a coherent state from the three body sites. The values indicate that the quantum state of photon emitted by the subject could be a coherent state in the subject being investigated. The objective in the present study was to systematically quantify, in subjects with long-term meditation experience and subjects without this experience, the photon count distribution of 12 different locations. Data show a variation in quantum state parameters within each individual subject as well as variation in quantum state parameters between the groups.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1999-01-01
Statistical aspects of major (intense) hurricanes, those of category 3 or higher on the Saffir-Simpson scale (e.g., having a maximum sustained wind speed of greater than or equal to 50 m s^-1), in the Atlantic basin during the interval of 1950-1998 are investigated in relation to the El Nino-Southern Oscillation cycle and to the postulated "more" versus "less" activity modes for intense hurricane activity. Based on Poisson statistics, when the hurricane season is simply classified as "non-El Nino-related" (NENR), the probability of having three or more intense hurricanes is approx. 53%, while it is only approx. 14% when it is classified as "El Nino-related" (ENR). Including the activity levels ("more" versus "less"), the probability of having three or more intense hurricanes is computed to be approx. 71% for the "more-NENR" season, 30% for the "less-NENR" season, 17% for the "more-ENR" season, and 12% for the "less-ENR" season. Because the 1999 hurricane season is believed to be a "more-NENR" season, the number of intense hurricanes forming in the Atlantic basin should be above average in number, probably about 4 plus or minus 1 or higher.
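The Poisson computation described above is a one-liner once a seasonal mean rate is chosen. A minimal sketch with placeholder rates picked only to echo the reported contrast between non-El Nino and El Nino seasons:

```python
from scipy.stats import poisson

def prob_three_or_more(mean_count):
    # P(N >= 3) = 1 - P(N <= 2) for a Poisson seasonal count of intense hurricanes
    return poisson.sf(2, mean_count)

for label, rate in [("illustrative non-El Nino season", 2.7), ("illustrative El Nino season", 1.1)]:
    print(label, round(prob_three_or_more(rate), 2))
```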
Investigation of an Optimum Detection Scheme for a Star-Field Mapping System
NASA Technical Reports Server (NTRS)
Aldridge, M. D.; Credeur, L.
1970-01-01
An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
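A minimal sketch of the normal-density construction compared above: stabilized weights are the ratio of the marginal exposure density to the exposure density conditional on covariates. The data-generating step and variable names are illustrative assumptions, not the simulation design of the paper:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5_000
z = rng.normal(size=n)               # confounder (illustrative)
a = 0.8 * z + rng.normal(size=n)     # continuous exposure

# Denominator: normal density of exposure given covariates, from a linear model.
fit = sm.OLS(a, sm.add_constant(z)).fit()
dens_cond = norm.pdf(a, loc=fit.fittedvalues, scale=np.sqrt(fit.scale))

# Numerator: marginal normal density of exposure (stabilizes the weights).
dens_marg = norm.pdf(a, loc=a.mean(), scale=a.std(ddof=1))

weights = dens_marg / dens_cond      # stabilized inverse probability weights
print(round(weights.mean(), 3), np.percentile(weights, [1, 99]).round(2))
```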
NASA Astrophysics Data System (ADS)
Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing
2017-11-01
In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the information of the intensity distribution will couple with the wavefront derivatives to cause wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.
Blaustein, R A; Dao, Thanh H; Pachepsky, Y A; Shelton, D R
2017-05-01
Limited information exists on the unhindered release of bioactive phosphorus (P) from a manure layer to model the partitioning and transport of component P forms before they reach an underlying soil. Rain simulations were conducted to quantify effects of intensity (30, 60, and 90 mm h-1) on P release from an application of 60 Mg ha-1 of dairy manure. Runoff contained water-extractable (WEP), exchangeable and enzyme-labile bioactive P (TBIOP), in contrast to the operationally defined "dissolved-reactive P" form. The released P concentrations and flow-weighted mass loads were described by the log-normal probability density function. At a reference condition of 30 mm h-1 and maintaining the surface at a 5% incline, runoff was minimal, and WEP accounted for 20.9% of leached total P (TP) concentrations, with an additional 25-30% as exchangeable and enzyme-labile bioactive P over the 1-h simulation. On a 20% incline, increased intensity accelerated occurrence of concentration maxima and shifted the skewed P concentration distribution more to the left. Differences in trends of WEP, TBIOP, or net enzyme-labile P (PHP_o) cumulative mass released per unit mass of manure between intensities were attributable to the higher frequency of raindrops striking the manure layer, thus increasing detachment and load of colloidal PHP_o of the water phases. Thus, detailed knowledge of manure physical characteristics, bioactive P distribution in relation to rain intensity, and attainment of steady-state of water fluxes were critical factors in improved prediction of partitioning and movement of manure-borne P under rainfall. Published by Elsevier Ltd.
Monte Carlo simulation of a photodisintegration of 3H experiment in Geant4
NASA Astrophysics Data System (ADS)
Gray, Isaiah
2013-10-01
An upcoming experiment involving photodisintegration of 3H at the High Intensity Gamma-Ray Source facility at Duke University has been simulated in the software package Geant4. CAD models of silicon detectors and wire chambers were imported from Autodesk Inventor using the program FastRad and the Geant4 GDML importer. Sensitive detectors were associated with the appropriate logical volumes in the exported GDML file so that changes in detector geometry will be easily manifested in the simulation. Probability distribution functions for the energy and direction of outgoing protons were generated using numerical tables from previous theory, and energies and directions were sampled from these distributions using a rejection sampling algorithm. The simulation will be a useful tool to optimize detector geometry, estimate background rates, and test data analysis algorithms. This work was supported by the Triangle Universities Nuclear Laboratory REU program at Duke University.
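A minimal sketch of the rejection-sampling step described above, drawing outgoing-proton energies from a tabulated (unnormalized) probability density; the table itself is invented for illustration:

```python
import numpy as np

def rejection_sample(grid, pdf_values, n_samples, rng=np.random.default_rng()):
    """Accept/reject sampling from a tabulated 1-D density, interpolated linearly between grid points."""
    lo, hi, pmax = grid.min(), grid.max(), pdf_values.max()
    out = []
    while len(out) < n_samples:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, pmax) <= np.interp(x, grid, pdf_values):
            out.append(x)
    return np.array(out)

energy_grid = np.linspace(0.0, 10.0, 101)                     # MeV, illustrative grid
pdf_table = np.exp(-0.5 * ((energy_grid - 4.0) / 1.2) ** 2)   # unnormalized placeholder density table
samples = rejection_sample(energy_grid, pdf_table, 1_000)
print(samples.mean(), samples.std())
```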
NASA Astrophysics Data System (ADS)
Frémont, F.
2015-05-01
A classical model based on the resolution of Hamilton equations of motion is used to determine the angular distribution of H projectiles following single-electron capture in H++H collisions at an incident projectile energy of 250 eV. At such low energies, the experimental charge-exchange probability and angular differential cross sections exhibit oscillatory structures that are classically related to the number of swaps the electron experiences between the target and the projectile during the collision. These oscillations are well reproduced by models based on quantum mechanics. In the present paper, the angular distribution of H projectiles is determined classically, at angles varying from 0.1° up to 7°. The variation in intensity due to interferences caused by the indiscernibility between different trajectories is calculated, and the role of these interferences is discussed.
Temporal intermittency of energy dissipation in magnetohydrodynamic turbulence.
Zhdankin, Vladimir; Uzdensky, Dmitri A; Boldyrev, Stanislav
2015-02-13
Energy dissipation in magnetohydrodynamic (MHD) turbulence is known to be highly intermittent in space, being concentrated in sheetlike coherent structures. Much less is known about intermittency in time, another fundamental aspect of turbulence which has great importance for observations of solar flares and other space or astrophysical phenomena. In this Letter, we investigate the temporal intermittency of energy dissipation in numerical simulations of MHD turbulence. We consider four-dimensional spatiotemporal structures, "flare events," responsible for a large fraction of the energy dissipation. We find that although the flare events are often highly complex, they exhibit robust power-law distributions and scaling relations. We find that the probability distribution of dissipated energy has a power-law index close to α≈1.75, similar to observations of solar flares, indicating that intense dissipative events dominate the heating of the system. We also discuss the temporal asymmetry of flare events as a signature of the turbulent cascade.
Probabilistic tsunami hazard analysis: Multiple sources and global applications
Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël; Parsons, Thomas E.; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie
2017-01-01
Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should be in principle evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.
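The hazard-curve arithmetic behind the exceedance probabilities described above is simple once annual exceedance rates are in hand: for Poissonian occurrence the contributions of independent sources add in rate. A minimal sketch with invented rates:

```python
import numpy as np

# Illustrative annual rates at which three source types produce run-up >= 2 m at a target site.
annual_exceedance_rates = np.array([1.0e-3, 4.0e-4, 2.5e-4])

def prob_exceed(rates, exposure_years):
    lam = np.sum(rates)                  # combined annual exceedance rate
    return 1.0 - np.exp(-lam * exposure_years)

print(prob_exceed(annual_exceedance_rates, 50))   # probability of exceedance within 50 years
```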
Probabilistic Tsunami Hazard Analysis: Multiple Sources and Global Applications
NASA Astrophysics Data System (ADS)
Grezio, Anita; Babeyko, Andrey; Baptista, Maria Ana; Behrens, Jörn; Costa, Antonio; Davies, Gareth; Geist, Eric L.; Glimsdal, Sylfest; González, Frank I.; Griffin, Jonathan; Harbitz, Carl B.; LeVeque, Randall J.; Lorito, Stefano; Løvholt, Finn; Omira, Rachid; Mueller, Christof; Paris, Raphaël.; Parsons, Tom; Polet, Jascha; Power, William; Selva, Jacopo; Sørensen, Mathilde B.; Thio, Hong Kie
2017-12-01
Applying probabilistic methods to infrequent but devastating natural events is intrinsically challenging. For tsunami analyses, a suite of geophysical assessments should be in principle evaluated because of the different causes generating tsunamis (earthquakes, landslides, volcanic activity, meteorological events, and asteroid impacts) with varying mean recurrence rates. Probabilistic Tsunami Hazard Analyses (PTHAs) are conducted in different areas of the world at global, regional, and local scales with the aim of understanding tsunami hazard to inform tsunami risk reduction activities. PTHAs enhance knowledge of the potential tsunamigenic threat by estimating the probability of exceeding specific levels of tsunami intensity metrics (e.g., run-up or maximum inundation heights) within a certain period of time (exposure time) at given locations (target sites); these estimates can be summarized in hazard maps or hazard curves. This discussion presents a broad overview of PTHA, including (i) sources and mechanisms of tsunami generation, emphasizing the variety and complexity of the tsunami sources and their generation mechanisms, (ii) developments in modeling the propagation and impact of tsunami waves, and (iii) statistical procedures for tsunami hazard estimates that include the associated epistemic and aleatoric uncertainties. Key elements in understanding the potential tsunami hazard are discussed, in light of the rapid development of PTHA methods during the last decade and the globally distributed applications, including the importance of considering multiple sources, their relative intensities, probabilities of occurrence, and uncertainties in an integrated and consistent probabilistic framework.
Vaudaux, Catherine; Schneider, Uwe; Kaser-Hotz, Barbara
2007-01-01
We evaluated the impact of inverse planned intensity-modulated radiation therapy (IMRT) on the dose-volume histograms (DVHs) and on the normal tissue complication probabilities (NTCPs) of brain and eyes in dogs with nasal tumors. Nine dogs with large, caudally located nasal tumors were planned using conventional techniques and inverse planned IMRT for a total prescribed dose of 52.5 Gy in 3.5 Gy fractions. The equivalent uniform dose for brain and eyes was calculated to estimate the normal tissue complication probability (NTCP) of these organs. The NTCP values as well as the DVHs were used to compare the treatment plans. The dose distribution in IMRT plans was more conformal than in conventional plans. The average dose delivered to one-third of the brain was 10 Gy lower with the IMRT plan compared with conventional planning. The mean partial brain volume receiving 43.6 Gy or more was reduced by 25.6% with IMRT. As a consequence, the NTCPs were also significantly lower in the IMRT plans. The mean NTCP of brain was two times lower and at least one eye could be saved in all patients planned with IMRT. Another possibility with IMRT is dose escalation in the target to improve tumor control while keeping the NTCPs at the same level as for conventional planning.
Park, Sung Ho; Park, Suk Won; Oh, Do Hoon; Choi, Youngmin; Kim, Jeung Kee; Ahn, Yong Chan; Park, Won; Suh, Hyun Sook; Lee, Rena; Bae, Hoonsik
2009-01-01
The intensity-modulated radiation therapy (IMRT) planning strategies for nasopharyngeal cancer among Korean radiation oncology facilities were investigated. Five institutions with IMRT planning capacity using the same planning system were invited to participate in this study. The institutions were requested to produce the best plan possible for 2 cases that would deliver 70 Gy to the planning target volume of gross tumor (PTV1), 59.4 Gy to the PTV2, and 51.5 Gy to the PTV3 in which elective irradiation was required. The advised fractionation number was 33. The planning parameters, resultant dose distributions, and biological indices were compared. We found 2-3-fold variations in the volume of treatment targets. Similar degree of variation was found in the delineation of normal tissue. The physician-related factors in IMRT planning had more influence on the plan quality. The inhomogeneity index of PTV dose ranged from 4 to 49% in Case 1, and from 5 to 46% in Case 2. Variation in tumor control probabilities for the primary lesion and involved LNs was less marked. Normal tissue complication probabilities for parotid glands and skin showed marked variation. Results from this study suggest that greater efforts in providing training and continuing education in terms of IMRT planning parameters usually set by physician are necessary for the successful implementation of IMRT. PMID:19399266
NASA Astrophysics Data System (ADS)
Salis, M.; Ager, A.; Arca, B.; Finney, M.; Bacciu, V. M.; Spano, D.; Duce, P.
2012-12-01
Spatial and temporal patterns of fire spread and behavior are dependent on interactions among climate, topography, vegetation and fire suppression efforts (Pyne et al. 1996; Viegas 2006; Falk et al. 2007). Humans also play a key role in determining frequency and spatial distribution of ignitions (Bar Massada et al, 2011), and thus influence fire regimes as well. The growing incidence of catastrophic wildfires has led to substantial losses for important ecological and human values within many areas of the Mediterranean basin (Moreno et al. 1998; Mouillot et al. 2005; Viegas et al. 2006a; Riaño et al. 2007). The growing fire risk issue has led to many new programs and policies of fuel management and risk mitigation by environmental and fire agencies. However, risk-based methodologies to help identify areas characterized by high potential losses and prioritize fuel management have been lacking for the region. Formal risk assessment requires the joint consideration of likelihood, intensity, and susceptibility, the product of which estimates the chance of a specific loss (Brillinger 2003; Society of Risk Analysis, 2006). Quantifying fire risk therefore requires estimates of a) the probability of a specific location burning at a specific intensity and location, and b) the resulting change in financial or ecological value (Finney 2005; Scott 2006). When large fires are the primary cause of damage, the application of this risk formulation requires modeling fire spread to capture landscape properties that affect burn probability. Recently, the incorporation of large fire spread into risk assessment systems has become feasible with the development of high performance fire simulation systems (Finney et al. 2011) that permit the simulation of hundreds of thousands of fires to generate fine scale maps of burn probability, flame length, and fire size, while considering the combined effects of weather, fuels, and topography (Finney 2002; Andrews et al. 2007; Ager and Finney 2009; Finney et al. 2009; Salis et al. 2012 accepted). In this work, we employed wildfire simulation methods to quantify wildfire exposure to human and ecological values for the island of Sardinia, Italy. The work was focused on the risk and exposure posed by large fires (e.g. 100 - 10,000 ha), and considers historical weather, ignition patterns and fuels. We simulated 100,000 fires using burn periods that replicated the historical size distribution on the Island, and an ignition probability grid derived from historic ignition data. We then examine spatial variation in three exposure components (burn probability, flame length, fire size) among important human and ecological values. The results allowed us to contrast exposure among and within the various features examined, and highlighted the importance of human factors in shaping wildfire exposure in Sardinia. The work represents the first application of burn probability modeling in the Mediterranean region, and sets the stage for expanded work in the region to quantify risk from large fires.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
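How the two distributions are combined is not spelled out in this abstract, so only the named building blocks are sketched here, with placeholder parameters: a zero-truncated Poisson for the condition that at least one thunderstorm event occurs over the larger area, and a negative binomial for counts.

```python
import numpy as np
from scipy.stats import poisson, nbinom

lam, r, p = 2.0, 2, 0.4       # illustrative parameters only

# Zero-truncated Poisson pmf: P(N = k | N >= 1) = pois(k; lam) / (1 - pois(0; lam))
k = np.arange(1, 10)
trunc_pois_pmf = poisson.pmf(k, lam) / (1.0 - poisson.pmf(0, lam))

# Negative binomial pmf on counts 0, 1, 2, ...
nb_pmf = nbinom.pmf(np.arange(0, 10), r, p)

print(round(trunc_pois_pmf.sum(), 4), round(nb_pmf.sum(), 4))   # each close to 1 over a long enough range
```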
NASA Astrophysics Data System (ADS)
Moernaut, J.; Van Daele, M.; Fontijn, K.; Heirman, K.; Kempf, P.; Pino, M.; Valdebenito, G.; Urrutia, R.; Strasser, M.; De Batist, M.
2018-01-01
Historical and paleoseismic records in south-central Chile indicate that giant earthquakes on the subduction megathrust - such as in AD1960 (Mw 9.5) - reoccur on average every ∼300 yr. Based on geodetic calculations of the interseismic moment accumulation since AD1960, it was postulated that the area already has the potential for a Mw 8 earthquake. However, to estimate the probability of such a great earthquake to take place in the short term, one needs to frame this hypothesis within the long-term recurrence pattern of megathrust earthquakes in south-central Chile. Here we present two long lacustrine records, comprising up to 35 earthquake-triggered turbidites over the last 4800 yr. Calibration of turbidite extent with historical earthquake intensity reveals a different macroseismic intensity threshold (≥ VII½ vs. ≥ VI½) for the generation of turbidites at the coring sites. The strongest earthquakes (≥ VII½) have longer recurrence intervals (292 ± 93 yr) than earthquakes with intensity of ≥ VI½ (139 ± 69 yr). Moreover, distribution fitting and the coefficient of variation (CoV) of inter-event times indicate that the stronger earthquakes recur in a more periodic way (CoV: 0.32 vs. 0.5). Regional correlation of our multi-threshold shaking records with coastal paleoseismic data of complementary nature (tsunami, coseismic subsidence) suggests that the intensity ≥ VII½ events repeatedly ruptured the same part of the megathrust over a distance of at least ∼300 km and can be assigned to Mw ≥ 8.6. We hypothesize that a zone of high plate locking - identified by geodetic studies and large slip in AD 1960 - acts as a dominant regional asperity, on which elastic strain builds up over several centuries and mostly gets released in quasi-periodic great and giant earthquakes. Our paleo-records indicate that Poissonian recurrence models are inadequate to describe large megathrust earthquake recurrence in south-central Chile. Moreover, they show an enhanced probability for a Mw 7.7-8.5 earthquake during the next 110 years whereas the probability for a Mw ≥ 8.6 (AD1960-like) earthquake remains low in this period.
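To make the closing contrast concrete, here is a minimal sketch comparing a quasi-periodic (lognormal) recurrence model built from the mean interval and CoV quoted above with a Poisson model, for a 110-year forecast window. The elapsed time since the AD 1960 rupture and the choice of a lognormal form are assumptions for illustration, not the authors' exact calculation:

```python
import numpy as np
from scipy.stats import lognorm

mean_rec, cov = 292.0, 0.32        # recurrence statistics for the strongest (>= VII 1/2) events
elapsed, window = 58.0, 110.0      # assumed years elapsed since AD 1960, and forecast window

# Lognormal recurrence distribution matching the given mean and coefficient of variation.
sigma = np.sqrt(np.log(1.0 + cov**2))
mu = np.log(mean_rec) - 0.5 * sigma**2
dist = lognorm(s=sigma, scale=np.exp(mu))

cond_prob = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)
poisson_prob = 1.0 - np.exp(-window / mean_rec)
print(round(cond_prob, 2), round(poisson_prob, 2))   # quasi-periodic vs. time-independent estimate
```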
On the importance of incorporating sampling weights in ...
Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm conceptualized to a synthetic rainfall hyetograph by a Gaussian shape with the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that without crucial influence on the modelling accuracy, the FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
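A minimal sketch of the Gaussian rainstorm parameterization used above: a synthetic hyetograph is rebuilt from depth, duration and peak intensity. Tying the Gaussian width to depth and peak via depth = peak · σ · √(2π) is one simple assumption for illustration, not necessarily the paper's calibration:

```python
import numpy as np

def gaussian_hyetograph(depth_mm, duration_min, peak_mm_per_min, n_steps=240):
    """Synthetic intensity i(t) = peak * exp(-(t - t0)^2 / (2 sigma^2)), centred on the event."""
    t = np.linspace(0.0, duration_min, n_steps)
    t0 = duration_min / 2.0
    sigma = depth_mm / (peak_mm_per_min * np.sqrt(2.0 * np.pi))   # assumed link between the 3 parameters
    return t, peak_mm_per_min * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

t, i = gaussian_hyetograph(depth_mm=25.0, duration_min=120.0, peak_mm_per_min=1.2)
print(round(float(np.sum(i) * (t[1] - t[0])), 1))   # total depth (mm) recovered, close to 25
```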
The mean intensity of radiation at 2 microns in the solar neighborhood
NASA Technical Reports Server (NTRS)
Jura, M.
1979-01-01
Consideration is given to the value of the mean intensity at 2 microns in the solar neighborhood, and it is found that it is likely to be a factor of four greater than previously estimated on theoretical grounds. It is noted however, that the estimate does agree with a reasonable extrapolation of the results of the survey of the Galactic plane by the Japanese group. It is concluded that the mean intensity in the solar neighborhood therefore probably peaks somewhat longward of 1 micron, and that this result is important for understanding the temperature of interstellar dust and the intensity of the far infrared background. This means specifically that dark clouds probably emit significantly more far infrared radiation than previously predicted.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system network today is facing the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies, load is taken as constant. But load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load and solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
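A minimal sketch of the Monte-Carlo step described above: bus loads are drawn from their means and standard deviations and each sample is handed to a deterministic radial load flow, here replaced by a placeholder function so the sketch stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(3)
p_mean, p_std = np.array([1.2, 0.8, 0.5]), np.array([0.12, 0.08, 0.05])   # MW, illustrative buses
q_mean, q_std = np.array([0.6, 0.4, 0.25]), np.array([0.06, 0.04, 0.03])  # MVAr, illustrative

def deterministic_load_flow(p_load, q_load):
    """Placeholder for a backward/forward-sweep radial load flow; returns a dummy loss figure."""
    return 0.02 * (p_load.sum() + q_load.sum())

losses = [deterministic_load_flow(rng.normal(p_mean, p_std), rng.normal(q_mean, q_std))
          for _ in range(1_000)]
print(round(np.mean(losses), 4), round(np.std(losses), 4))   # probabilistic loss statistics
```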
NASA Astrophysics Data System (ADS)
Lozovatsky, I.; Fernando, H. J. S.; Planella-Morato, J.; Liu, Zhiyu; Lee, J.-H.; Jinadasa, S. U. P.
2017-10-01
The probability distribution of turbulent kinetic energy dissipation rate in stratified ocean usually deviates from the classic lognormal distribution that has been formulated for and often observed in unstratified homogeneous layers of atmospheric and oceanic turbulence. Our measurements of vertical profiles of micro-scale shear, collected in the East China Sea, northern Bay of Bengal, to the south and east of Sri Lanka, and in the Gulf Stream region, show that the probability distributions of the dissipation rate ε_r in the pycnoclines (r ≈ 1.4 m is the averaging scale) can be successfully modeled by the Burr (type XII) probability distribution. In weakly stratified boundary layers, lognormal distribution of ε_r is preferable, although the Burr is an acceptable alternative. The skewness Skε and the kurtosis Kε of the dissipation rate appear to be well correlated in a wide range of Skε and Kε variability.
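SciPy ships the Burr type XII family directly, so the distribution fitting described above can be sketched in a few lines; the synthetic sample below merely stands in for measured, scale-averaged dissipation rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
eps = stats.lognorm(s=1.2, scale=1.0).rvs(size=2_000, random_state=rng)   # stand-in normalized dissipation rates

# Fit Burr XII and lognormal to the same sample and compare log-likelihoods.
c, d, loc, scale = stats.burr12.fit(eps, floc=0.0)
ll_burr = stats.burr12.logpdf(eps, c, d, loc, scale).sum()

s, loc_ln, scale_ln = stats.lognorm.fit(eps, floc=0.0)
ll_logn = stats.lognorm.logpdf(eps, s, loc_ln, scale_ln).sum()

print("Burr XII log-likelihood:", round(ll_burr, 1), " lognormal:", round(ll_logn, 1))
```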
Tables of stark level transition probabilities and branching ratios in hydrogen-like atoms
NASA Technical Reports Server (NTRS)
Omidvar, K.
1980-01-01
The transition probabilities, given in terms of n′, k′ and n, k, are tabulated. No additional summing or averaging is necessary. The electric quantum number k plays the role of the angular momentum quantum number l in the presence of an electric field. The branching ratios between Stark levels are also tabulated. Necessary formulas for the transition probabilities and branching ratios are given. Symmetries are discussed and selection rules are given. Some disagreements for some branching ratios are found between the present calculation and the measurement of Mark and Wierl. The transition probability multiplied by the statistical weight of the initial state is called the static intensity J_S, while the branching ratios are called the dynamic intensity J_D.
Yoneyama, Takeshi; Watanabe, Tetsuyo; Kagawa, Hiroyuki; Hayashi, Yutaka; Nakada, Mitsutoshi
2017-03-01
In photodynamic diagnosis using 5-aminolevulinic acid (5-ALA), discrimination between the tumor and normal tissue is very important for a precise resection. However, it is difficult to distinguish between infiltrating tumor and normal regions in the boundary area. In this study, fluorescent intensity and bright spot analyses using a confocal microscope are proposed for the precise discrimination between infiltrating tumor and normal regions. From the 5-ALA-resected brain tumor tissue, the red fluorescent and marginal regions were sliced for observation under a confocal microscope. Hematoxylin and eosin (H&E) staining was performed on serial slices of the same tissue. According to the pathological inspection of the H&E slides, the tumor and infiltrating and normal regions on confocal microscopy images were investigated. From the fluorescent intensity of the image pixels, a histogram of pixel number with the same fluorescent intensity was obtained. The fluorescent bright spot sizes and total number were compared between the marginal and normal regions. The fluorescence intensity distribution and average intensity in the tumor were different from those in the normal region. The probability of a difference from the dark enhanced the difference between the tumor and the normal region. The bright spot size and number in the infiltrating tumor were different from those in the normal region. Fluorescence intensity analysis is useful to distinguish a tumor region, and a bright spot analysis is useful to distinguish between infiltrating tumor and normal regions. These methods will be important for the precise resection or photodynamic therapy of brain tumors. Copyright © 2016 Elsevier B.V. All rights reserved.
Disk Disruptions and X-ray Intensity Excursions in Cyg X-2, LMC X-3 and Cyg X-3
NASA Astrophysics Data System (ADS)
Boyd, P. T.; Smale, A. P.
2001-05-01
The RXTE All Sky Monitor soft X-ray light curves of many X-ray binaries show long-term intensity variations (a.k.a "superorbital periodicities") that have been ascribed to precession of a warped, tilted accretion disk around the X-ray source. We have found that the excursion times between X-ray minima in Cyg X-2 can be characterized as a series of integer multiples of the 9.8 day binary orbital period (as opposed to the previously reported stable 77.7 day single periodicity, or a single modulation whose period changes slowly with time). While the data set is too short for a proper statistical analysis, it is clear that the length of any given intensity excursion cannot be used to predict the next (integer) excursion length in the series. In the black hole candidate system LMC X-3, the excursion times are shown to be related to each other by rational fractions. We find that the long term light curve of the unusual galactic X-ray jet source Cyg X-3 can also be described as a series of intensity excursions related to each other by integer multiples of a fundamental underlying clock. In the latter cases, the clock is apparently not related to the known binary periods. A unified physical model, involving both an inclined accretion disk and a fixed-probability disk disruption mechanism is presented, and compared with three-body scattering results. Each time the disk passes through the orbital plane it experiences a fixed probability P that it will disrupt. This model has testable predictions: the distribution of integers should resemble that of an atomic process with a characteristic half-life. Further analysis can support or refute the model, and shed light on what system parameters effectively set the value of P.
Intensity of Territorial Marking Predicts Wolf Reproduction: Implications for Wolf Monitoring
García, Emilio J.
2014-01-01
Background The implementation of intensive and complex approaches to monitor large carnivores is resource demanding, restricted to endangered species, small populations, or small distribution ranges. Wolf monitoring over large spatial scales is difficult, but the management of such contentious species requires regular estimations of abundance to guide decision-makers. The integration of wolf marking behaviour with simple sign counts may offer a cost-effective alternative to monitor the status of wolf populations over large spatial scales. Methodology/Principal Findings We used a multi-sampling approach, based on the collection of visual and scent wolf marks (faeces and ground scratching) and the assessment of wolf reproduction using howling and observation points, to test whether the intensity of marking behaviour around the pup-rearing period (summer-autumn) could reflect wolf reproduction. Between 1994 and 2007 we collected 1,964 wolf marks in a total of 1,877 km surveyed and we searched for the pups' presence (1,497 howling and 307 observations points) in 42 sampling sites with a regular presence of wolves (120 sampling sites/year). The number of wolf marks was ca. 3 times higher in sites with a confirmed presence of pups (20.3 vs. 7.2 marks). We found a significant relationship between the number of wolf marks (mean and maximum relative abundance index) and the probability of wolf reproduction. Conclusions/Significance This research establishes a real-time relationship between the intensity of wolf marking behaviour and wolf reproduction. We suggest a conservative cutting point of 0.60 for the probability of wolf reproduction to monitor wolves on a regional scale combined with the use of the mean relative abundance index of wolf marks in a given area. We show how the integration of wolf behaviour with simple sampling procedures permits rapid, real-time, and cost-effective assessments of the breeding status of wolf packs with substantial implications to monitor wolves at large spatial scales. PMID:24663068
1978-03-01
Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials. Thesis, AFIT/GAE. An equation is derived for the risk of rupture of a unidirectionally laminated composite subjected to pure bending; this equation can be simplified further.
Grunewald, E.D.; Stein, R.S.
2006-01-01
In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M ≥ 6.7; the largest earthquake in the catalog is the 1703 M ∼ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ∼7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 ≤ M ≤ 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimations of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10^26 dyn cm yr^-1 with considerable uncertainty at the 1σ level: (-0.11, +0.20) × 10^26 dyn cm yr^-1. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M ≥ 7.9 earthquakes is 7-11%.
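A minimal sketch of the final step above: with the truncated Gutenberg-Richter parameters quoted (b = 0.96, maximum magnitude 8.4) and an assumed annual rate of M >= 4.5 events (a placeholder, not the catalog value), the Poisson 30-year probability of an M >= 7.9 earthquake follows directly:

```python
import numpy as np

b, m_min, m_max = 0.96, 4.5, 8.4
annual_rate_mmin = 10.0        # assumed annual rate of M >= 4.5 events near Tokyo (illustrative only)

def fraction_above(m):
    """Fraction of events with magnitude >= m under the truncated Gutenberg-Richter relation."""
    num = 10 ** (-b * (m - m_min)) - 10 ** (-b * (m_max - m_min))
    den = 1.0 - 10 ** (-b * (m_max - m_min))
    return num / den

rate_79 = annual_rate_mmin * fraction_above(7.9)                    # annual rate of M >= 7.9 events
print(round(rate_79, 4), round(1.0 - np.exp(-30.0 * rate_79), 3))   # 30-year Poisson probability
```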
Mesoscale thermospheric wind in response to nightside auroral brightening
NASA Astrophysics Data System (ADS)
Nishimura, T.; Zou, Y.; Gabrielse, C.; Lyons, L. R.; Varney, R. H.; Conde, M.; Hampton, D. L.; Mende, S. B.
2017-12-01
Although high-latitude ionospheric flows and thermospheric winds in the F-region are overall characterized by two-cell patterns over a global scale (∼1000 km), intense energy input from the magnetosphere often occurs in a mesoscale (∼100 km) and transient manner (∼10 min). Intense mesoscale energy input would drive enhanced mesoscale winds, whose properties are closely associated with auroral arcs and associated ionospheric flows. However, how thermospheric winds respond to and distribute around mesoscale magnetospheric input has not been characterized systematically. This presentation addresses how mesoscale winds distribute around quasi-steady arcs, evolve and distribute around transient arcs, and vary with geomagnetic and solar activity. We use Scanning Doppler Imagers (SDIs), all-sky imagers and PFISR over Alaska. A channel of azimuthal neutral wind is often found associated with localized flow channels adjacent to quasi-steady discrete aurora. The wind speed dynamically changes after a short time lag (a few tens of minutes) from auroral brightenings, including auroral streamers and intensifications on preexisting auroral arcs. This is in contrast to a much longer time lag (∼1 hour) reported previously. During a storm main phase, a coherent equatorward motion of the Harang discontinuity was seen in plasma flow, aurora and neutral wind, with a few degrees of equatorward displacement of the neutral wind Harang, which is probably due to the inertia. These results suggest that a tight M-I-T connection exists under the energy input of assorted auroral arcs and that mesoscale coupling processes are important in M-I-T energy transfer.
Project "Convective Wind Gusts" (ConWinG)
NASA Astrophysics Data System (ADS)
Mohr, Susanna; Richter, Alexandra; Kunz, Michael; Ruck, Bodo
2017-04-01
Convectively-driven strong winds usually associated with thunderstorms frequently cause substantial damage to buildings and other structures in many parts of the world. Decisive for the high damage potential are the short-term wind speed maxima with duration of a few seconds, termed as gusts. Several studies have shown that convectively-driven gusts can reach even higher wind speeds compared to turbulent gusts associated with synoptic-scale weather systems. Due to the small-scale and non-stationary nature of convective wind gusts, there is a considerable lack of knowledge regarding their characteristics and statistics. Furthermore, their interaction with urban structures and their influence on buildings is not yet fully understood. For these two reasons, convective wind events are not included in the present wind load standards of buildings and structures, which so far have been based solely on the characteristics of synoptically-driven wind gusts in the near-surface boundary layer (e.g., DIN EN 1991-1-4:2010-12; ASCE7). However, convective and turbulent gusts differ considerably, e.g. concerning vertical wind-speed profiles, gust factors (i.e., maximum to mean wind speed), or exceedance probability curves. In an effort to remedy this situation, the overarching objective of the DFG-project "Convective Wind Gusts" (ConWinG) is to investigate the characteristics and statistics of convective gusts as well as their interaction with urban structures. Based on a set of 110 climate stations of the German Weather Service (DWD) between 1992 and 2014, we analyzed the temporal and spatial distribution, intensity, and occurrence probability of convective gusts. Similar to thunderstorm activity, the frequency of convective gusts decreases gradually from South to North Germany. A relation of gust intensity/probability to orography or climate conditions cannot be identified. Rather, high wind speeds, e.g., above 30 m/s, can be expected everywhere in Germany with almost similar occurrence probabilities. A laboratory experiment with an impinging jet simulating the downdraft was performed to investigate the propagation of a gust within built environment. The aim is to investigate the interaction of the resulting convective gusts along the near-surface layers with different urban structures - from single street canyons up to more complex block array structures. It was shown that high velocities are conserved within street canyons over longer distances compared to open terrain conditions. In addition, the experiments revealed the ratio of building height to downdraft size as a crucial factor with regard to vertical velocities at roof level and the pressure distribution on the facades.
NASA Astrophysics Data System (ADS)
Murphy, Sheila F.; Writer, Jeffrey H.; Blaine McCleskey, R.; Martin, Deborah A.
2015-08-01
Storms following wildfires are known to impair drinking water supplies in the southwestern United States, yet our understanding of the role of precipitation in post-wildfire water quality is far from complete. We quantitatively assessed water-quality impacts of different hydrologic events in the Colorado Front Range and found that for a three-year period, substantial hydrologic and geochemical responses downstream of a burned area were primarily driven by convective storms with a 30 min rainfall intensity >10 mm h^-1. These storms, which typically occur several times each year in July-September, are often small in area, short-lived, and highly variable in intensity and geographic distribution. Thus, a rain gage network with high temporal resolution and spatial density, together with high-resolution stream sampling, is required to adequately characterize post-wildfire responses. We measured total suspended sediment, dissolved organic carbon (DOC), nitrate, and manganese concentrations that were 10-156 times higher downstream of a burned area compared to upstream during relatively common (50% annual exceedance probability) rainstorms, and water quality was sufficiently impaired to pose water-treatment concerns. Short-term water-quality impairment was driven primarily by increased surface runoff during higher intensity convective storms that caused erosion in the burned area and transport of sediment and chemical constituents to streams. Annual sediment yields downstream of the burned area were controlled by storm events and subsequent remobilization, whereas DOC yields were closely linked to annual runoff and thus were more dependent on interannual variation in spring runoff. Nitrate yields were highest in the third year post-wildfire. Results from this study quantitatively demonstrate that water quality can be altered for several years after wildfire. Because the southwestern US is prone to wildfires and high-intensity rain storms, the role of storms in post-wildfire water-quality impacts must be considered when assessing water-quality vulnerability.
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
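To make the dependence-function construction concrete, the following minimal sketch builds a bivariate extreme value CDF from two-parameter Frechet margins using the logistic (Gumbel) dependence form. It is an illustration only, not the STS load model: the logistic form, the use of scipy's invweibull as the Frechet margin, and all parameter values are assumptions.

import numpy as np
from scipy import stats

def bivariate_logistic_cdf(x, y, F1, F2, alpha):
    """Gumbel (logistic) bivariate extreme-value CDF built from marginal
    CDFs F1, F2 and a dependence parameter 0 < alpha <= 1
    (alpha = 1 gives independence; alpha -> 0 gives complete dependence)."""
    u = -np.log(F1(x))   # unit-exponential transform of the first margin
    v = -np.log(F2(y))   # and of the second margin
    return np.exp(-(u**(1.0 / alpha) + v**(1.0 / alpha))**alpha)

# Illustrative two-parameter Frechet margins (shape c and scale are assumed values)
F1 = stats.invweibull(c=2.0, scale=1.5).cdf
F2 = stats.invweibull(c=3.0, scale=2.0).cdf

print(bivariate_logistic_cdf(2.0, 2.5, F1, F2, alpha=0.6))

Setting alpha = 1 recovers the product of the margins (independence), which is one simple way to check such a sketch against the analytical conditions mentioned above.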
Hough, Susan E.
2013-01-01
Recent parallel development of improved quantitative methods to analyze intensity distributions for historical earthquakes and of web‐based systems for collecting intensity data for modern earthquakes provides an opportunity to reconsider not only important individual historical earthquakes but also the overall characterization of intensity distributions for historical events. The focus of this study is a comparison of intensity distributions of historical earthquakes with those from modern earthquakes for which intensities have been determined by the U.S. Geological Survey “Did You Feel It?” (DYFI) website (see Data and Resources). As an example of a historical earthquake, I focus initially on the 1843 Marked Tree, Arkansas, event. Its magnitude has been previously estimated as 6.0–6.2. I first reevaluate the macroseismic effects of this earthquake, assigning intensities using a traditional approach, and estimate a preferred magnitude of 5.4. Modified Mercalli intensity (MMI) values for the Marked Tree earthquake are higher, on average, than those from the 2011 Mw 5.8 Mineral, Virginia, earthquake for distances ≤500 km but comparable or lower on average at larger distances, with a smaller overall felt extent. Intensity distributions for other moderate historical earthquakes reveal similar discrepancies; the discrepancy is even more pronounced using earlier published intensities for the 1843 earthquake. I discuss several hypotheses to explain the discrepancies, including the possibility that intensity values associated with historical earthquakes are commonly inflated due to reporting/sampling biases. A detailed consideration of the DYFI intensity distribution for the Mineral earthquake illustrates how reporting and sampling biases can account for historical earthquake intensity biases as high as two intensity units and for the qualitative difference in intensity distance decays for modern versus historical events. Thus, intensity maps for historical earthquakes tend to imply more widespread damage patterns than are revealed by intensity distributions of modern earthquakes of comparable magnitude. However, intensity accounts of historical earthquakes often include fragmentary accounts suggesting long‐period shaking effects that will likely not be captured fully in historical intensity distributions.
Fault Specific Seismic Hazard Maps as Input to Loss Reserves Calculation for Attica Buildings
NASA Astrophysics Data System (ADS)
Deligiannakis, Georgios; Papanikolaou, Ioannis; Zimbidis, Alexandros; Roberts, Gerald
2014-05-01
Greece is prone to various natural disasters, such as wildfires, floods, landslides and earthquakes, due to the particular environmental and geological conditions that prevail at tectonic plate boundaries. Earthquakes are the predominant risk in terms of damage and casualties in the Greek territory. The historical record of earthquakes in Greece has been published by various researchers, providing useful data for seismic hazard assessment. However, the completeness of the historical record in Greece, despite being one of the longest worldwide, reaches only 500 years for M ≥ 7.3 and less than 200 years for M ≥ 6.5. Considering that active faults in the area have recurrence intervals of a few hundred to several thousands of years, it is clear that many active faults have not been activated during the completeness period covered by the historical records. New seismic hazard assessment methodologies tend to follow fault-specific approaches where seismic sources are geologically constrained active faults, in order to address problems related to the incompleteness of the historical record, obtain higher spatial resolution and calculate realistic source-to-site distances, since the seismic sources are very accurately located. Fault-specific approaches provide quantitative assessments as they measure fault slip rates from geological data, providing a more reliable estimate of seismic hazard. We used a fault-specific seismic hazard assessment approach for the region of Attica. The method of seismic hazard mapping from geological fault throw-rate data combines three major factors: empirical relationships between fault rupture lengths, earthquake magnitudes and coseismic slip; the radii of the VI, VII, VIII and IX isoseismals on the Modified Mercalli (MM) intensity scale; and attenuation-amplification functions for seismic shaking on bedrock compared to basin-filling sediments. We explicitly modeled 22 active faults that could affect the region of Attica, including Athens, using detailed data derived from published papers, neotectonic maps and fieldwork observations. Moreover, we incorporated background seismicity models from the historic record and also the distribution of subduction zone earthquakes, for the integration of strong deep earthquakes that could also affect the Attica region. We created 4 high spatial resolution seismic hazard maps for the region of Attica, one for each of the intensities VII - X (MM). These maps offer a locality-specific shaking recurrence record, which represents the long-term shaking record in a more complete way, since they incorporate several seismic cycles of the active faults that could affect Attica. Each one of these high resolution seismic hazard maps displays both the spatial distribution and the recurrence, over a specific time period, of the relevant intensity. Time-independent probabilities were extracted based on these average recurrence intervals, using the stationary Poisson model P = 1 - e^(-Λt). The Λ value was provided by the intensity recurrence rates displayed in the seismic hazard maps. However, insurance contracts usually lack detailed spatial information and refer to the Postal Code level, akin to CRESTA zones. To this end, a time-independent probability of shaking at intensities VII - X was calculated for every Postal Code, for a given time period, using the Poisson model. 
The reserves calculation on a buildings portfolio combines the probability of events of specific intensities within the Postal Codes with the building characteristics, such as the construction type and the insured value. We propose a standard approach for the reserves calculation K(t) for a specific time period: K(t) = x2 · [x1·y1·P1(t) + x1·y2·P2(t) + x1·y3·P3(t) + x1·y4·P4(t)], which is a function of the probabilities of occurrence of the seismic intensities VII - X over the same period (P1(t) - P4(t)), the value of the building x1, the insured value x2 and the characteristics of the building, such as the construction type, age, height and use of property (y1 - y4). Furthermore, a stochastic approach is also adopted in order to obtain the relevant reserve value K(t) for the specific time period. This calculation draws a set of simulations from the Poisson random variable and then takes the respective expectations.
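A minimal sketch of the two calculations just described, the stationary Poisson probability P = 1 - e^(-Λt) and the reserve formula K(t), is given below. All numerical inputs (mean recurrence intervals, building value, insured value and the y coefficients) are placeholders, not values from the Attica study.

import math

def poisson_shaking_prob(mean_recurrence_years, t_years):
    """Time-independent probability of at least one exceedance in t years,
    P = 1 - exp(-L * t), taking L as 1 / (mean recurrence interval)."""
    lam = 1.0 / mean_recurrence_years
    return 1.0 - math.exp(-lam * t_years)

def reserve(t, x1, x2, y, recurrences):
    """K(t) = x2 * [x1*y1*P1(t) + ... + x1*y4*P4(t)] for intensities VII-X.
    x1: building value, x2: insured value, y: building coefficients (y1..y4),
    recurrences: mean recurrence intervals of intensities VII-X in years."""
    probs = [poisson_shaking_prob(r, t) for r in recurrences]
    return x2 * sum(x1 * yi * pi for yi, pi in zip(y, probs))

# Placeholder inputs for a single Postal Code (all values are assumptions)
print(reserve(t=50, x1=1.0, x2=200000.0,
              y=[0.02, 0.05, 0.15, 0.40],
              recurrences=[100.0, 400.0, 2000.0, 10000.0]))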
Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick
2012-01-01
Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4 × 10^-4 for the exponential distribution and 2.3 × 10^-4 for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5 × 10^-4, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km^3 in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km^3, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6 × 10^-4. For erupted volumes ≥10 km^3, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the probability of a large eruption in the next year at 1.4 × 10^-5.
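As a minimal illustration of how an interval chronology is turned into an eruption probability, the sketch below fits an exponential model to the intervals between dated events and evaluates the probability of at least one event in the next year. The chronology used is synthetic, not the Lassen or Cascade data, and the mixed-exponential and Weibull alternatives are not shown.

import numpy as np

def annual_probability_exponential(event_ages_ka):
    """Fit an exponential interval model to a chronology of event ages (in ka)
    and return the probability of at least one event in the next year,
    P = 1 - exp(-dt / mean_interval) with dt = 1 year."""
    ages_yr = np.sort(np.asarray(event_ages_ka, dtype=float))[::-1] * 1000.0
    intervals = ages_yr[:-1] - ages_yr[1:]     # gaps between successive events
    mean_interval = intervals.mean()           # maximum-likelihood estimate
    return 1.0 - np.exp(-1.0 / mean_interval)

# Synthetic chronology (ka before present), for illustration only
print(annual_probability_exponential([90, 66, 45, 31, 27, 18, 12, 7, 1.1]))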
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super prompt critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte-Carlo calculations based on the algorithm presented in [7]. (authors)
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension
NASA Astrophysics Data System (ADS)
Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek
2018-04-01
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
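A minimal Monte Carlo sketch of the model above, a one-dimensional run-and-tumble particle with speed v, direction-flipping rate gamma and optional translational diffusion D, reproduces the crossover from the intermediate-time non-Gaussian shape to the late-time Gaussian. The parameter values and time step are arbitrary choices, not those used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def rtp_positions(n, t_final, v=1.0, gamma=1.0, D=0.0, dt=1e-2):
    """Simulate n independent run-and-tumble particles on the infinite line
    and return their positions at time t_final."""
    x = np.zeros(n)
    sigma = rng.choice([-1.0, 1.0], size=n)       # internal velocity direction
    for _ in range(int(t_final / dt)):
        x += v * sigma * dt
        if D > 0.0:
            x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
        flips = rng.random(n) < gamma * dt        # direction flips at rate gamma
        sigma[flips] *= -1.0
    return x

# Early times: strongly non-Gaussian, with weight piling up near +/- v*t.
# Late times: variance grows like (v**2 / gamma + 2 * D) * t, as for diffusion.
print(rtp_positions(10000, t_final=0.5).std(), rtp_positions(10000, t_final=10.0).std())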
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
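For the Onemax case mentioned above, the exact fitness distribution after uniform bit-flip mutation can also be obtained directly, because the ones lost and the zeros gained are two independent binomial counts. The sketch below uses that elementary route rather than the Krawtchouk-polynomial formulation of the paper, with an arbitrary string length, parent fitness and flip probability.

import numpy as np
from scipy import stats

def onemax_fitness_pmf(n, k, p):
    """Exact pmf of the Onemax fitness after flipping each bit of an n-bit
    string containing k ones independently with probability p.
    Ones lost ~ Binomial(k, p); zeros turned into ones ~ Binomial(n - k, p)."""
    lost = stats.binom.pmf(np.arange(k + 1), k, p)
    gained = stats.binom.pmf(np.arange(n - k + 1), n - k, p)
    pmf = np.zeros(n + 1)
    for x, px in enumerate(lost):
        for y, py in enumerate(gained):
            pmf[k - x + y] += px * py        # new fitness = k - lost + gained
    return pmf                               # each entry is a polynomial in p

pmf = onemax_fitness_pmf(n=20, k=12, p=0.05)
print(pmf.sum(), int(pmf.argmax()))          # sums to ~1; mode near the parent fitness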
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
NASA Astrophysics Data System (ADS)
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes the profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of a demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market except for his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection to classical works of Fibonacci and to the search for the quickest algorithm for obtaining the extremum of a convex function: the profit intensity reaches its maximum when the probability of transaction is given by the golden ratio rule (√5 - 1)/2. This condition sets a sharp criterion for the validity of the model and can be tested with real market data.
Free space optical ultra-wideband communications over atmospheric turbulence channels.
Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu
2010-08-02
A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.
External noise-induced transitions in a current-biased Josephson junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Qiongwei; Xue, Changfeng, E-mail: cfxue@163.com; Tang, Jiashi
We investigate noise-induced transitions in a current-biased and weakly damped Josephson junction in the presence of multiplicative noise. By using the stochastic averaging procedure, the averaged amplitude equation describing dynamic evolution near a constant phase difference is derived. Numerical results show that a stochastic Hopf bifurcation between an absorbing and an oscillatory state occurs. This means the external controllable noise triggers a transition into the non-zero junction voltage state. With the increase of noise intensity, the stationary probability distribution peak shifts and is characterised by increased width and reduced height. Different transition rates are shown for large and small bias currents.
Infrared observations of Jovian aurora from Juno's first orbits: Main oval and satellite footprints
NASA Astrophysics Data System (ADS)
Mura, A.; Adriani, A.; Altieri, F.; Connerney, J. E. P.; Bolton, S. J.; Moriconi, M. L.; Gérard, J.-C.; Kurth, W. S.; Dinelli, B. M.; Fabiano, F.; Tosi, F.; Atreya, S. K.; Bagenal, F.; Gladstone, G. R.; Hansen, C.; Levin, S. M.; Mauk, B. H.; McComas, D. J.; Sindoni, G.; Filacchione, G.; Migliorini, A.; Grassi, D.; Piccioni, G.; Noschese, R.; Cicchetti, A.; Turrini, D.; Stefani, S.; Amoroso, M.; Olivieri, A.
2017-06-01
The Jovian Infrared Auroral Mapper (JIRAM) is an imager/spectrometer on board the NASA/Juno mission for the study of the Jovian aurorae. The first results of JIRAM's imager channel observations of the H3+ infrared emission, collected around the first Juno perijove, provide excellent spatial and temporal distribution of the Jovian aurorae, and show the morphology of the main ovals, the polar regions, and the footprints of Io, Europa and Ganymede. The extended Io "tail" persists for 3 h after the passage of the satellite flux tube. Multi-arc structures of varied spatial extent appear in both main auroral ovals. Inside the main ovals, intense, localized emissions are observed. In the southern aurora, an evident circular region of strong depletion of H3+ emissions is partially surrounded by an intense emission arc. The southern aurora is brighter than the northern one in these observations. Similar, probably conjugate emission patterns are distinguishable in both polar regions.
ERIC Educational Resources Information Center
Conant, Darcy Lynn
2013-01-01
Stochastic understanding of probability distribution undergirds development of conceptual connections between probability and statistics and supports development of a principled understanding of statistical inference. This study investigated the impact of an instructional course intervention designed to support development of stochastic…
NASA Astrophysics Data System (ADS)
Nystuen, Jeffrey A.; Amitai, Eyal
2003-04-01
The underwater sound generated by raindrop splashes on a water surface is loud and distinctive, allowing detection, classification and quantification of rainfall. One of the advantages of the acoustic measurement is that the listening area, an effective catchment area, is proportional to the depth of the hydrophone and can be orders of magnitude greater than that of other in situ rain gauges. This feature allows high temporal resolution of the rainfall measurement. A series of rain events with extremely high rainfall rates, over 100 mm/hr, is examined acoustically. Rapid onset and cessation of rainfall intensity are detected within the convective cells of these storms, with maximum 5-s resolution values exceeding 1000 mm/hr. The probability distribution functions (pdf) for rainfall rate occurrence and water volume computed at the longer temporal resolutions typical of other instruments do not include these extreme values. The variance of sound intensity within different acoustic frequency bands can be used as an aid to classify rainfall type. Objective acoustic classification algorithms are proposed. Within each rainfall classification the relationship between sound intensity and rainfall rate is nearly linear. The reflectivity factor, Z, also has a linear relationship with rainfall rate, R, for each rainfall classification.
Correlated noise-based switches and stochastic resonance in a bistable genetic regulation system
NASA Astrophysics Data System (ADS)
Wang, Can-Jun; Yang, Ke-Li
2016-07-01
The correlated noise-based switches and stochastic resonance are investigated in a bistable single-gene switching system driven by an additive noise (environmental fluctuations) and a multiplicative noise (fluctuations of the degradation rate). The correlation between the two noise sources originates from the lysis-lysogeny pathway system of the λ phage. The steady state probability distribution is obtained by solving the time-independent Fokker-Planck equation, and the effects of the noises are analyzed. The effect of the noises on the switching time between the two stable states (mean first passage time) is investigated by numerical simulation. The stochastic resonance phenomenon is analyzed by the power amplification factor. The results show that the multiplicative noise can induce switching from "on" → "off" of the protein production, while the additive noise and the correlation between the noise sources can induce the inverse switching "off" → "on". A nonmonotonic behaviour of the average switching time versus the multiplicative noise intensity, for different cross-correlation and additive noise intensities, is observed in the genetic system. There exist optimal values of the additive noise, multiplicative noise and cross-correlation intensities for which a weak signal can be optimally amplified.
Probability distributions of the electroencephalogram envelope of preterm infants.
Saji, Ryoya; Hirasawa, Kyoko; Ito, Masako; Kusuda, Satoshi; Konishi, Yukuo; Taga, Gentaro
2015-06-01
To determine the stationary characteristics of electroencephalogram (EEG) envelopes for prematurely born (preterm) infants and investigate the intrinsic characteristics of early brain development in preterm infants. Twenty neurologically normal sets of EEGs recorded in infants with a post-conceptional age (PCA) range of 26-44 weeks (mean 37.5 ± 5.0 weeks) were analyzed. Hilbert transform was applied to extract the envelope. We determined the suitable probability distribution of the envelope and performed a statistical analysis. It was found that (i) the probability distributions for preterm EEG envelopes were best fitted by lognormal distributions at 38 weeks PCA or less, and by gamma distributions at 44 weeks PCA; (ii) the scale parameter of the lognormal distribution had positive correlations with PCA as well as a strong negative correlation with the percentage of low-voltage activity; (iii) the shape parameter of the lognormal distribution had significant positive correlations with PCA; (iv) the statistics of mode showed significant linear relationships with PCA, and, therefore, it was considered a useful index in PCA prediction. These statistics, including the scale parameter of the lognormal distribution and the skewness and mode derived from a suitable probability distribution, may be good indexes for estimating stationary nature in developing brain activity in preterm infants. The stationary characteristics, such as discontinuity, asymmetry, and unimodality, of preterm EEGs are well indicated by the statistics estimated from the probability distribution of the preterm EEG envelopes. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
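The envelope-extraction and model-comparison steps described above can be sketched as follows: take the Hilbert envelope of a signal, fit lognormal and gamma candidates by maximum likelihood, and compare their log-likelihoods. The signal here is synthetic band-limited noise rather than preterm EEG, and fixing the location parameter at zero is an assumption of the sketch.

import numpy as np
from scipy.signal import hilbert
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for one EEG channel: smoothed white noise, 256 Hz, 60 s
signal = np.convolve(rng.standard_normal(256 * 60), np.ones(8) / 8, mode="same")

envelope = np.abs(hilbert(signal))                 # instantaneous amplitude

ln_shape, _, ln_scale = stats.lognorm.fit(envelope, floc=0)
g_shape, _, g_scale = stats.gamma.fit(envelope, floc=0)
ll_lognorm = stats.lognorm.logpdf(envelope, ln_shape, 0, ln_scale).sum()
ll_gamma = stats.gamma.logpdf(envelope, g_shape, 0, g_scale).sum()
print("lognormal log-likelihood:", ll_lognorm, " gamma log-likelihood:", ll_gamma)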
NASA Astrophysics Data System (ADS)
Lylova, A. N.; Sheldakova, Yu. V.; Kudryashov, A. V.; Samarkin, V. V.
2018-01-01
We consider the methods for modelling doughnut and super-Gaussian intensity distributions in the far field by means of deformable bimorph mirrors. A method for the rapid formation of a specified intensity distribution using a Shack-Hartmann sensor is proposed, and the results of the modelling of doughnut and super-Gaussian intensity distributions are presented.
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But, is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How do changes in the watershed or climate over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
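The standard U.S. fitting step mentioned above, a Pearson Type III distribution fitted by the method of moments to the base-10 logarithms of annual peaks, can be sketched as follows. The peak flows are synthetic placeholders, and the sketch omits the regional-skew weighting, outlier screening and uncertainty computations used in practice.

import numpy as np
from scipy import stats

# Synthetic annual maximum flows (m^3/s); a real analysis would use the gaged record
peaks = np.array([410, 520, 380, 910, 640, 300, 1250, 480, 700, 560,
                  820, 390, 460, 1100, 530, 610, 350, 980, 440, 720])

logq = np.log10(peaks)
mean, std, skew = logq.mean(), logq.std(ddof=1), stats.skew(logq, bias=False)

# Log-Pearson Type III by the method of moments: Pearson III on log10 flows
lp3 = stats.pearson3(skew, loc=mean, scale=std)

for T in (2, 10, 100):                        # return periods in years
    q_T = 10 ** lp3.ppf(1.0 - 1.0 / T)        # flow with annual exceedance probability 1/T
    print(T, "yr flood:", round(float(q_T)), "m^3/s")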
14 CFR 23.1389 - Position light distribution and intensities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Equipment Lights § 23.1389 Position light distribution and intensities. (a) General. The intensities prescribed in this section must be provided by new equipment with each light cover and color filter in place...
14 CFR 23.1389 - Position light distribution and intensities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Equipment Lights § 23.1389 Position light distribution and intensities. (a) General. The intensities prescribed in this section must be provided by new equipment with each light cover and color filter in place...
14 CFR 23.1389 - Position light distribution and intensities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Equipment Lights § 23.1389 Position light distribution and intensities. (a) General. The intensities prescribed in this section must be provided by new equipment with each light cover and color filter in place...
Random local temporal structure of category fluency responses.
Meyer, David J; Messer, Jason; Singh, Tanya; Thomas, Peter J; Woyczynski, Wojbor A; Kaye, Jeffrey; Lerner, Alan J
2012-04-01
The Category Fluency Test (CFT) provides a sensitive measurement of cognitive capabilities in humans related to retrieval from semantic memory. In particular, it is widely used to assess progress of cognitive impairment in patients with dementia. Previous research shows that, to a first approximation, the intensity of tested individuals' responses within a standard 60-s test period decays exponentially with time, with faster decay rates for more cognitively impaired patients. Such a decay rate can then be viewed as a global (macro) diagnostic parameter of each test. In the present paper we focus on the statistical properties of the properly de-trended time intervals between consecutive responses (inter-call times) in the Category Fluency Test. In a sense, those properties reflect the local (micro) structure of the response generation process. We find that a good approximation for the distribution of the de-trended inter-call times is provided by the Weibull distribution, a probability distribution that appears naturally in this context as the distribution of a minimum of independent random quantities and is a standard tool in industrial reliability theory. This insight leads us to a new interpretation of the concept of "navigating a semantic space" via patient responses.
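A minimal sketch of the interval analysis is given below: estimate the exponential decay of response intensity, rescale each inter-call time by its locally expected value (one simple way to de-trend, not necessarily the paper's exact procedure), and fit a Weibull distribution to the rescaled intervals. The response times are invented.

import numpy as np
from scipy import stats

# Invented response times (seconds) within a 60-s category fluency trial
response_times = np.array([1.2, 2.0, 3.1, 4.6, 6.4, 8.9, 12.0, 16.1,
                           21.4, 27.9, 35.8, 45.5, 57.2])
intervals = np.diff(response_times)
mid = 0.5 * (response_times[1:] + response_times[:-1])

# Fit a decaying response intensity lambda(t) = exp(a + b*t), b < 0, to 1/interval,
# then de-trend each interval by its expected value 1/lambda(t)
b, a = np.polyfit(mid, np.log(1.0 / intervals), 1)
detrended = intervals * np.exp(a + b * mid)

shape, _, scale = stats.weibull_min.fit(detrended, floc=0)
print("Weibull shape:", shape, "scale:", scale)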
Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas
2013-01-01
Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
Stylized facts in internal rates of return on stock index and its derivative transactions
NASA Astrophysics Data System (ADS)
Pichl, Lukáš; Kaizoji, Taisei; Yamano, Takuya
2007-08-01
Universal features in stock markets and their derivative markets are studied by means of probability distributions in internal rates of return on buy and sell transaction pairs. Unlike the stylized facts in normalized log returns, the probability distributions for such single asset encounters incorporate the time factor by means of the internal rate of return, defined as the continuous compound interest. The resulting stylized facts are shown in the probability distributions derived from the daily series of TOPIX, S&P 500 and FTSE 100 index close values. The application of the above analysis to minute-tick data of NIKKEI 225 and its futures market, respectively, reveals an interesting difference in the behavior of the two probability distributions when a threshold on the minimal duration of the long position is imposed. It is therefore suggested that the probability distributions of the internal rates of return could be used for causality mining between the underlying and derivative stock markets. The highly specific discrete spectrum, which results from noise trader strategies as opposed to the smooth distributions observed for fundamentalist strategies in single encounter transactions, may be useful in deducing the type of investment strategy from the trading revenues of small portfolio investors.
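The internal rate of return used above has a simple closed form for a buy-sell pair under continuous compounding, r = ln(P_sell / P_buy) / t, and its empirical distribution over many transaction pairs is the object whose stylized facts are examined. The sketch below builds such a distribution from a synthetic price series; the series, holding periods and trading-day convention are assumptions, not the TOPIX, S&P 500 or NIKKEI data.

import numpy as np

def internal_rate_of_return(buy_price, sell_price, holding_days, year_days=250.0):
    """Continuously compounded (annualised) rate solving sell = buy * exp(r * t)."""
    t = holding_days / year_days
    return np.log(sell_price / buy_price) / t

rng = np.random.default_rng(6)
closes = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, size=500)))  # synthetic index closes

idx = np.arange(499)
holding = rng.integers(1, 60, size=499)                  # holding period in trading days
sells = closes[np.minimum(idx + holding, 499)]
rates = internal_rate_of_return(closes[idx], sells, holding)

print(np.percentile(rates, [5, 50, 95]))                 # summary of the empirical IRR distribution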
Probabilistic Reasoning for Robustness in Automated Planning
NASA Technical Reports Server (NTRS)
Schaffer, Steven; Clement, Bradley; Chien, Steve
2007-01-01
A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and the resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.
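The plan-scoring idea can be sketched with plain (untruncated) Gaussians: sum the per-action means and variances to get the net resource distribution, then integrate its tail beyond the resource limit to obtain a conflict probability. The action list and capacity below are invented, and ignoring the truncation of the bounded Gaussians is a simplification of this sketch, not of the planner.

import math

def conflict_probability(actions, capacity):
    """Probability that the summed resource use of independent Gaussian actions
    (mean, sigma) exceeds the available capacity."""
    mu = sum(m for m, s in actions)
    var = sum(s * s for m, s in actions)
    z = (capacity - mu) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2.0))    # P(total use > capacity)

# Hypothetical plan: (mean, sigma) energy use in watt-hours for three actions
plan = [(120.0, 10.0), (80.0, 8.0), (45.0, 5.0)]
print(conflict_probability(plan, capacity=270.0))  # score used to steer the search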
Effects of track and threat information on judgments of hurricane strike probability.
Wu, Hao-Che; Lindell, Michael K; Prater, Carla S; Samuelson, Charles D
2014-06-01
Although evacuation is one of the best strategies for protecting citizens from hurricane threat, the ways that local elected officials use hurricane data in deciding whether to issue hurricane evacuation orders are not well understood. To begin to address this problem, we examined the effects of hurricane track and intensity information in a laboratory setting where participants judged the probability that hypothetical hurricanes with a constant bearing (i.e., straight line forecast track) would make landfall in each of eight 45 degree sectors around the Gulf of Mexico. The results from 162 participants in a student sample showed that the judged strike probability distributions over the eight sectors within each scenario were, unsurprisingly, unimodal and centered on the sector toward which the forecast track pointed. More significantly, although strike probability judgments for the sector in the direction of the forecast track were generally higher than the corresponding judgments for the other sectors, the latter were not zero. Most significantly, there were no appreciable differences in the patterns of strike probability judgments for hurricane tracks represented by a forecast track only, an uncertainty cone only, or a forecast track with an uncertainty cone, a result consistent with a recent survey of coastal residents threatened by Hurricane Charley. The study results suggest that people are able to correctly process basic information about hurricane tracks but they do make some errors. More research is needed to understand the sources of these errors and to identify better methods of displaying uncertainty about hurricane parameters. © 2013 Society for Risk Analysis.
Mathematical Model to estimate the wind power using four-parameter Burr distribution
NASA Astrophysics Data System (ADS)
Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu
2018-03-01
When the actual probability distribution of wind speed at a given site needs to be described, the four-parameter Burr distribution is more suitable than other distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution in wind speed prediction is also discussed, and the expression for the probability distribution of the output power of a wind turbine is deduced.
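A minimal sketch of the wind-power use of such a fit is given below: fit scipy's Burr Type XII family (two shape parameters plus location and scale, i.e. four parameters) to wind-speed data and integrate 0.5*rho*v^3 against the fitted density to estimate mean wind power density. The wind speeds are synthetic, and scipy's burr12 parameterisation is used as a stand-in for the paper's form.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
wind = stats.weibull_min.rvs(2.0, scale=7.5, size=2000, random_state=rng)  # synthetic speeds, m/s

c, d, loc, scale = stats.burr12.fit(wind, floc=0)       # four-parameter Burr XII fit
burr = stats.burr12(c, d, loc=loc, scale=scale)

rho = 1.225                                              # air density, kg/m^3
power_density = burr.expect(lambda v: 0.5 * rho * v**3)  # mean wind power per m^2 of swept area
print(c, d, scale, power_density)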
Serra-Sogas, Norma; O'Hara, Patrick D; Canessa, Rosaline; Keller, Peter; Pelot, Ronald
2008-05-01
This paper examines the use of exploratory spatial analysis for identifying hotspots of shipping-based oil pollution in the Pacific Region of Canada's Exclusive Economic Zone. It makes use of data collected from fiscal years 1997/1998 to 2005/2006 by the National Aerial Surveillance Program, the primary tool for monitoring and enforcing the provisions imposed by MARPOL 73/78. First, we present oil spill data as points in a "dot map" relative to coastlines, harbors and the aerial surveillance distribution. Then, we explore the intensity of oil spill events using the Quadrat Count method and Kernel Density Estimation with both fixed and adaptive bandwidths. We found that oil spill hotspots were more clearly defined using Kernel Density Estimation with an adaptive bandwidth, probably because of the "clustered" distribution of oil spill occurrences. Finally, we discuss the importance of standardizing oil spill data by controlling for surveillance effort to provide a better understanding of the distribution of illegal oil spills, and how these results can ultimately benefit a monitoring program.
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know that it is continuous, and we work through full examples to illustrate the approach.
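The flat-within-intervals character of the FMED noted above follows because, with only fractile (CDF) constraints, the entropy-maximising density places its mass uniformly inside each fractile interval. A minimal sketch with invented fractiles:

import numpy as np

def fmed_pdf(fractile_values, fractile_probs):
    """Piecewise-constant maximum entropy density consistent with fractile
    constraints F(x_i) = p_i; the first and last entries are the support bounds.
    Returns the interval edges and the constant density on each interval."""
    x = np.asarray(fractile_values, dtype=float)
    p = np.asarray(fractile_probs, dtype=float)
    return x, np.diff(p) / np.diff(x)          # probability mass / interval width

# Invented elicited fractiles: support [0, 100], quartiles at 20, 35 and 60
edges, dens = fmed_pdf([0, 20, 35, 60, 100], [0.0, 0.25, 0.5, 0.75, 1.0])
print(edges, dens, float((dens * np.diff(edges)).sum()))   # densities integrate to 1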
Work probability distribution for a ferromagnet with long-ranged and short-ranged correlations
NASA Astrophysics Data System (ADS)
Bhattacharjee, J. K.; Kirkpatrick, T. R.; Sengers, J. V.
2018-04-01
Work fluctuations and work probability distributions are fundamentally different in systems with short-ranged versus long-ranged correlations. Specifically, in systems with long-ranged correlations the work distribution is extraordinarily broad compared to systems with short-ranged correlations. This difference profoundly affects the possible applicability of fluctuation theorems like the Jarzynski fluctuation theorem. The Heisenberg ferromagnet, well below its Curie temperature, is a system with long-ranged correlations in very low magnetic fields due to the presence of Goldstone modes. As the magnetic field is increased the correlations gradually become short ranged. Hence, such a ferromagnet is an ideal system for elucidating the changes of the work probability distribution as one goes from a domain with long-ranged correlations to a domain with short-ranged correlations by tuning the magnetic field. A quantitative analysis of this crossover behavior of the work probability distribution and the associated fluctuations is presented.
NASA Astrophysics Data System (ADS)
Kim, M.; Pangle, L. A.; Cardoso, C.; Lora, M.; Wang, Y.; Harman, C. J.; Troch, P. A. A.
2014-12-01
Transit time distributions (TTD) are an efficient way of characterizing transport through the complex flow dynamics of a hydrologic system, and can serve as a basis for spatially-integrated solute transport modeling. Recently there has been progress in the development of a theory of time-variable TTDs that captures the effect of temporal variability in the timing of fluxes as well as changes in flow pathways. Furthermore, a new formulation of this theory allows the essential transport properties of a system to be parameterized by a physically meaningful time-variable probability distribution, the Ω function. This distribution determines how the age distribution of water in storage is sampled by the outflow. The form of the Ω function varies if the flow pathways change, but is not determined by the timing of fluxes (unlike the TTD). In this study, we use this theory to characterize transport by transient flows through a homogeneously packed 1 m^3 sloping soil lysimeter. The transit time distributions associated with each of four irrigation periods (repeated daily for 24 days) are compared to examine the significance of changes in the Ω function due to variations in total storage, antecedent conditions, and precipitation intensity. We observe both the time-variable TTD and the Ω function experimentally by applying the PERTH method (Harman and Kim, 2014, GRL, 41, 1567-1575). The method allows us to observe multiple overlapping time-variable TTDs in controlled experiments using only two conservative tracers. We hypothesize that both the TTD and the Ω function will vary in time, even at this small scale, because water will take different flow pathways depending on the initial state of the lysimeter and the irrigation intensity. However, based primarily on modeling, we conjecture that major variability in the Ω function will be limited to a period during and immediately after each irrigation. We anticipate that the Ω function is almost time-invariant (or scales simply with total storage) during the recession period because flow pathways are stable during this period. This is one of the first experimental studies of this type, and the results offer insights into solute transport in transient, variably-saturated systems.
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2013-11-01
Elicitation is a technique that can be used to obtain probability distribution from experts about unknown quantities. We conducted a methodology review of reports where probability distributions had been elicited from experts to be used in model-based health technology assessments. Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data was abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise. Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises. Judged by the studies identified in the review, reports of expert elicitation are insufficient in detail and this impacts on the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantage of eliciting probability distributions from experts may be lost.
Measurement of absolute gamma emission probabilities
NASA Astrophysics Data System (ADS)
Sumithrarachchi, Chandana S.; Rengan, Krish; Griffin, Henry C.
2003-06-01
The energies and emission probabilities (intensities) of gamma-rays emitted in radioactive decays of particular nuclides are the most important characteristics by which to quantify mixtures of radionuclides. Often, quantification is limited by uncertainties in measured intensities. A technique was developed to reduce these uncertainties. The method involves obtaining a pure sample of a nuclide using radiochemical techniques, and using appropriate fractions for beta and gamma measurements. The beta emission rates were measured using a liquid scintillation counter, and the gamma emission rates were measured with a high-purity germanium detector. Results were combined to obtain absolute gamma emission probabilities. All sources of uncertainties greater than 0.1% were examined. The method was tested with 38Cl and 88Rb.
Seismic Hazard and Risk Assessments for Beijing-Tianjin-Tangshan, China, Area
Xie, F.; Wang, Z.; Liu, J.
2011-01-01
Seismic hazard and risk in the Beijing-Tianjin-Tangshan, China, area were estimated from 500-year intensity observations. First, we digitized the intensity observations (maps) using ArcGIS with a cell size of 0.1° × 0.1°. Second, we performed a statistical analysis on the digitized intensity data, determined an average b value (0.39), and derived the intensity-frequency relationship (hazard curve) for each cell. Finally, based on a Poisson model for earthquake occurrence, we calculated seismic risk in terms of a probability of I ≥ 7, 8, or 9 in 50 years. We also calculated the corresponding 10 percent probability of exceedance of these intensities in 50 years. The advantages of assessing seismic hazard and risk from intensity records are that (1) fewer assumptions (i.e., earthquake source and ground motion attenuation) are made, and (2) site-effect is included. Our study shows that the area has high seismic hazard and risk. Our study also suggests that current design peak ground acceleration or intensity for the area may not be adequate. © 2010 Birkhäuser / Springer Basel AG.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x1 = f1(U1, ..., Un), ..., xn = fn(U1, ..., Un) such that if U1, ..., Un are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x1, ..., xn coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
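The recursive construction can be illustrated for n = 2 with a conditional-inversion (Rosenblatt-type) sketch: x1 is obtained from U1 through the inverse marginal CDF and x2 from U2 through the inverse conditional CDF given x1. The particular bivariate distribution below (an exponential margin with an exponentially distributed conditional) is an assumed example, not one from the report.

import numpy as np

rng = np.random.default_rng(3)

def simulate_pair(n):
    """Simulate (x1, x2) with x1 ~ Exp(1) and x2 | x1 ~ Exp(rate = 1 + x1),
    using only independent uniforms via x1 = f1(u1), x2 = f2(u1, u2)."""
    u1, u2 = rng.random(n), rng.random(n)
    x1 = -np.log(1.0 - u1)                 # inverse of the marginal CDF of x1
    x2 = -np.log(1.0 - u2) / (1.0 + x1)    # inverse of the conditional CDF given x1
    return x1, x2

x1, x2 = simulate_pair(100000)
print(x1.mean(), x2.mean())                # ~1.0 and ~E[1/(1 + x1)] respectively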
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.
1991-01-01
The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the set of source term probability distributions provided as input by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor determined, and the functional relationship among all the factors established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
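The Monte Carlo combination step can be sketched as follows: draw each contributing factor from its distribution, combine the factors through their functional relationship (a simple product here), and summarise the consequence variable with a complementary cumulative distribution function. The factor models and the product form are placeholders, not the Ulysses source terms or consequence factors.

import numpy as np

rng = np.random.default_rng(4)
n = 100000

# Placeholder factor distributions (e.g. source term, release fraction, dose factor)
source_term = rng.lognormal(mean=0.0, sigma=0.5, size=n)
release_fraction = rng.beta(2.0, 8.0, size=n)
dose_factor = rng.lognormal(mean=-2.0, sigma=0.7, size=n)

consequence = source_term * release_fraction * dose_factor   # assumed functional relationship

# CCDF: probability that the consequence equals or exceeds a given level
for level in (0.01, 0.05, 0.1):
    print(level, float((consequence >= level).mean()))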
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data, despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolving values must be explained by the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
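A minimal Monte Carlo check of the Bessel-function statement above: if force magnitudes are exponentially distributed and directions isotropic in two dimensions, the density of a Cartesian component is (lam/pi) * K0(lam*|x|). The rate lam, the sample size and the binning are arbitrary choices for this sketch.

import numpy as np
from scipy.special import k0

rng = np.random.default_rng(7)
lam, n = 1.0, 200000

f = rng.exponential(1.0 / lam, size=n)            # exponential force magnitudes
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)     # isotropic directions
fx = f * np.cos(theta)                            # Cartesian component

hist, edges = np.histogram(fx, bins=80, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
predicted = (lam / np.pi) * k0(lam * np.abs(centers))
print(np.max(np.abs(hist - predicted)))           # small, except in the bins nearest x = 0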
Reuter; Ward; Blanckenhorn
1998-12-07
In most previous work on the yellow dung fly Scathophaga stercoraria (L.), as on other species, adaptive explanations have been sought for male behaviour whereas female behaviour has not been examined in similar detail. Here, the arrival of females at the mating site, fresh cattle droppings, is investigated. While almost all males are present shortly after pat deposition, females arrive at a low, decreasing rate over an interval of about 5 hours. We propose that the distribution of female arrival times represents a mixed Evolutionarily Stable Strategy (ESS), formed by different trade-offs between costs and benefits of early and late arrival. Early arrival could be favoured by advantages due to better conditions for oviposition, faster egg development or reduced larval competition. Late arrival could be favoured because the negative effects on females of male-male competition are weaker later after deposition. Computer simulations with distributions of arrival times deviating from the natural one were performed to "measure" the costs for females arriving at different times. These costs were compared with estimated benefits corresponding to the females' arrival times. This procedure revealed that females coming to the pat later in a population of females arriving shortly after deposition would be favoured. In a population arriving according to a uniform distribution, early females would have fitness advantages. Thus, evolution should lead to an intermediate distribution of arrival times, as in nature, i.e. female arrival behaviour is probably adaptive. The simulations also revealed that the intensity of sexual selection through male-male competition is highest with the natural pattern of female arrival. Therefore, natural selection generating this pattern amplifies the intensity of male-male interaction as a by-product. Copyright 1998 Academic Press
An evaluation of procedures to estimate monthly precipitation probabilities
NASA Astrophysics Data System (ADS)
Legates, David R.
1991-01-01
Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform normal probability density functions) are comparatively examined to determine their ability to represent accurately variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution more adequately describes the 'true' precipitation distribution than does any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin
2016-09-20
The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by one recently proposed statistical model including both boresight and jitter. The binary phase-shift keying subcarrier intensity modulation-based analytical average bit error rate (ABER) and outage probability expressions are achieved for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement with boresight included or not, despite the values of P and Q. The performance enhancement owing to the increase of cooperative path (P) is more evident with nonzero boresight than that with zero boresight (jitter only), whereas the performance deterioration because of the increasing hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulation is offered to verify the validity of ABER and outage probability expressions.
Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.
1979-01-01
Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
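As a concrete illustration of how such moments are used, the first two sample probability weighted moments give closed-form estimates of the Gumbel parameters (a standard application of the idea, shown here as a hedged sketch rather than the paper's own derivation):

```python
import numpy as np

def pwm_b(x, r):
    """Unbiased sample probability weighted moment b_r = E[X F(X)^r] from an ordered sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    # weights (j-1)(j-2)...(j-r) / ((n-1)(n-2)...(n-r))
    w = np.ones(n)
    for k in range(1, r + 1):
        w *= (j - k) / (n - k)
    return np.mean(w * x)

def gumbel_from_pwm(x):
    """Gumbel location and scale from b0 and b1."""
    b0, b1 = pwm_b(x, 0), pwm_b(x, 1)
    scale = (2.0 * b1 - b0) / np.log(2.0)
    loc = b0 - 0.5772156649 * scale          # Euler-Mascheroni constant
    return loc, scale

rng = np.random.default_rng(3)
sample = rng.gumbel(loc=10.0, scale=3.0, size=5000)
print(gumbel_from_pwm(sample))   # should be close to (10.0, 3.0)
```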
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, choosing the constraints and making the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
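The maximum entropy assignment under an average-value constraint can be sketched numerically: for discrete outcomes with a constrained mean "energy", the assigned distribution is exponential in that quantity, with the Lagrange multiplier fixed by root finding (an illustrative classical sketch, not the paper's quantum construction):

```python
import numpy as np
from scipy.optimize import brentq

# Discrete outcomes with "energies" E_i; constraint: <E> = E_target
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
E_target = 1.2

def mean_energy(beta):
    w = np.exp(-beta * E)
    p = w / w.sum()
    return p @ E

# Solve <E>(beta) = E_target for the Lagrange multiplier beta
beta = brentq(lambda b: mean_energy(b) - E_target, -10.0, 10.0)
p = np.exp(-beta * E)
p /= p.sum()

print("beta =", beta)
print("maximum-entropy probabilities:", p)
print("check <E> =", p @ E)
```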
NASA Technical Reports Server (NTRS)
Kim, Kyu-Myong; Lau, K. M.; Wu, H. T.; Kim, Maeng-Ki; Cho, Chunho
2012-01-01
The Russian heat wave and wild fires of the summer of 2010 constituted the most extreme weather event in the history of the country. Studies show that the root cause of the 2010 Russia heat wave/wild fires was an atmospheric blocking event which started to develop at the end of June and peaked around late July and early August. Atmospheric blocking in the summer of 2010 was anomalous in terms of its size, duration, and location, which shifted to the east of the normal location. This and other similar continental-scale severe summertime heat waves and blocking events in recent years have raised the question of whether such events are occurring more frequently and with higher intensity in a warmer climate induced by greenhouse gases. We studied the spatial and temporal distributions of the occurrence and intensity of atmospheric blocking and associated heat waves for northern summer over Eurasia based on CMIP5 model simulations. To examine the global-warming-induced change of atmospheric blocking and heat waves, experiments for a high emissions scenario (RCP8.5) and a medium mitigation scenario (RCP4.5) are compared to the 20th century simulations (historical). Most models simulate the mean distributions of blocking reasonably well, including major blocking centers over Eurasia, the northern Pacific, and the northern Atlantic. However, the models tend to underestimate the number of blocking events compared to the MERRA and NCEP/DOE reanalyses, especially in western Siberia. Models also reproduced the associated heat waves in terms of the shift in the probability distribution function of near-surface temperature. Seven out of eight models used in this study show that the frequency of atmospheric blocking over Europe will likely decrease in a warmer climate, but slightly increase over western Siberia. This spatial pattern resembles the blocking in the summer of 2010, indicating the possibility of more frequent occurrences of heat waves in western Siberia. In this talk, we will also discuss the potential effect of atmosphere-land feedback, particularly how a wetter spring affects the frequency and intensity of atmospheric blocking and heat waves during summer.
Guo, Zhixing; Li, Yijia
2017-01-01
The contribution of factors including fuel type, fire-weather conditions, topography and human activity to fire regime attributes (e.g. fire occurrence, size distribution and severity) has been intensively discussed. The relative importance of those factors in explaining the burn probability (BP), which is critical in terms of fire risk management, has been insufficiently addressed. Focusing on a subtropical coniferous forest with strong human disturbance in East China, our main objective was to evaluate and compare the relative importance of fuel composition, topography, and human activity for fire occurrence, size and BP. The local BP distribution was derived with a stochastic fire simulation approach using detailed historical fire data (1990–2010) and forest-resource survey results, on which our factor contribution analysis was based. Our results indicated that fuel composition had the greatest relative importance in explaining fire occurrence and size, but human activity explained most of the variance in BP. This implies that the influence of human activity is amplified through the process of overlapping repeated ignition and spreading events. This result emphasizes the status of strong human disturbance in local fire processes. It further confirms the need for a holistic perspective on factor contribution to fire likelihood, rather than focusing on individual fire regime attributes, for the purpose of fire risk management. PMID:28207837
NASA Astrophysics Data System (ADS)
Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad
2017-10-01
The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, regional analysis of this parameter is important. However, the ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distribution of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the ability of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters was similar. The results also showed that the distribution type of the parameters which affect ET0 values can affect the distribution of reference evapotranspiration.
Balderson, Michael; Brown, Derek; Johnson, Patricia; Kirkby, Charles
2016-01-01
The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experience a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from the values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Boets, Pieter; Lock, Koen; Goethals, Peter L. M.
2012-06-01
Harbours, which are often characterised by anthropogenic stress in combination with intensive international ship traffic, tend to be very susceptible to aquatic invasions. Since alien macrocrustaceans are known to be very successful across many European waters, a study was made on their distribution and impact in the four Belgian coastal harbours (Nieuwpoort, Ostend, Blankenberge and Zeebrugge). Biological and physical-chemical data were gathered at 43 sampling sites distributed along a salinity gradient in the four harbours. One-fourth of all crustacean species recorded were alien and represented on average 30% of the total macrocrustacean abundance and 65% of the total macrocrustacean biomass. The large share of alien crustaceans in the total macrocrustacean biomass was mainly due to several large alien crab species. Most alien species were found in the oligohaline zone, whereas the number of indigenous species slightly increased with increasing salinity. The low number of indigenous species present at low salinities was probably not only caused by salinity, but also by the lower water quality in this salinity range. Based on the site-specific biocontamination index (SBCI), which was used to assess the ecological water quality, the harbour of Nieuwpoort and Ostend scored best and were classified as good, indicating the limited abundance and the low number of alien macrocrustaceans. Sampling locations situated more inland generally had a higher SBCI and a lower ecological water quality. Zeebrugge and Blankenberge were characterised by a severe biocontamination. For Zeebrugge, this is probably related to the intensive transcontinental commercial ship traffic, whereas for Blankenberge, this could be due to introduction of alien species via recreational crafts or due to its geographical location in the proximity of Zeebrugge. Consistent monitoring of estuarine regions and harbours, which are seen as hotspots for introductions, could help in understanding and predicting the impact of alien species on native biota.
Diurnal variations of ELF transients and background noise in the Schumann resonance band
NASA Astrophysics Data System (ADS)
Greenberg, Eran; Price, Colin
2007-02-01
Schumann resonances (SR) are resonant electromagnetic waves in the Earth-ionosphere cavity, induced primarily by lightning discharges, with a fundamental frequency of about 8 Hz and higher-order modes separated by approximately 6 Hz. The SR are made up of the background signal resulting from global lightning activity and extremely low frequency (ELF) transients resulting from particularly intense lightning discharges somewhere on the planet. Since transients within the Earth-ionosphere cavity due to lightning propagate globally in the ELF range, we can monitor and study global ELF transients from a single station. Data from our Negev Desert (Israel) ELF site are collected using two horizontal magnetic induction coils and a vertical electric field ball antenna, monitored in the 5-40 Hz range with a sampling frequency of 250 Hz. In this paper we present statistics related to the probability distribution of ELF transients and background noise in the time domain and its temporal variations during the day. Our results show that the ELF signal in the time domain follows the normal distribution very well. The σ parameter exhibits three peaks at 0800, 1400, and 2000 UT, which are related to the three main global lightning activity centers in Asia, Africa, and America, respectively. Furthermore, the occurrence of intense ELF events obeys the Poisson distribution, with such intense events occurring every ~10 s, depending on the time of the day. We found that the diurnal changes of the σ parameter are several percent of the mean, while for the number of intense events per minute, the diurnal changes are tens of percent about the mean. We also present the diurnal changes of the SR intensities in the frequency domain as observed at our station. To better understand the diurnal variability of the observations, we simulated the measured ELF background noise using space observations as input, as detected by the Optical Transient Detector (OTD). The most active center which is reflected from both ELF measurements and OTD observations is in Africa. However, the second most active center on the basis of ELF measurements appears to be Asia, while OTD observations show that the American center is more active than the Asian center. These differences are discussed. This paper contributes to our understanding of the origin of the SR by comparing different lightning data sets: background electromagnetic radiation and optical emission observed from space.
Probabilistic reasoning in data analysis.
Sirovich, Lawrence
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented together with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Work probability distribution and tossing a biased coin
NASA Astrophysics Data System (ADS)
Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar
2011-01-01
We show that the rare events present in dissipated work that enters Jarzynski equality, when mapped appropriately to the phenomenon of large deviations found in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing work probability distribution independent of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.
Profit intensity and cases of non-compliance with the law of demand/supply
NASA Astrophysics Data System (ADS)
Makowski, Marcin; Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek
2017-05-01
We consider properties of the measurement intensity ρ of a random variable for which the probability density function represented by the corresponding Wigner function attains negative values on a part of the domain. We consider a simple economic interpretation of this problem. This model is used to present the applicability of the method to the analysis of the negative probability on markets where there are anomalies in the law of supply and demand (e.g. Giffen's goods). It turns out that the new conditions to optimize the intensity ρ require a new strategy. We propose a strategy (so-called à rebours strategy) based on the fixed point method and explore its effectiveness.
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization and linearized power flow, an optimal power flow problem with the objective of minimizing the cost of conventional power generation is solved. A reliability assessment for the distribution grid is thus implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices, and a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast reliability assessment method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
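The modelling step of the method (Weibull wind speed, Beta irradiance, LOLP and EENS indices) can be illustrated with a brief sampling sketch; note that the paper's point is to avoid Monte Carlo via discretization and linearized power flow, and that all parameters, the turbine power curve and the single-bus load model below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Weibull wind speed (m/s) and Beta solar irradiance (normalized 0..1); illustrative parameters
v = rng.weibull(2.0, size=n) * 8.0          # shape k = 2, scale c = 8
g = rng.beta(2.0, 3.0, size=n)              # normalized irradiance

def wind_power(v, p_rated=2.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Simple piecewise wind-turbine power curve (MW); an assumed model."""
    p = np.where((v >= v_in) & (v < v_rated), p_rated * (v - v_in) / (v_rated - v_in), 0.0)
    p = np.where((v >= v_rated) & (v < v_out), p_rated, p)
    return p

p_wind = wind_power(v)
p_solar = 1.5 * g                            # 1.5 MW(peak) PV park
p_conventional = 4.0                         # firm capacity on the feeder (MW)
load = rng.normal(5.5, 0.5, size=n)          # single aggregated load (MW)

shortfall = np.maximum(load - (p_wind + p_solar + p_conventional), 0.0)
lolp = np.mean(shortfall > 0)                # Loss Of Load Probability
eens = shortfall.mean() * 8760               # Expected Energy Not Supplied (MWh/year)
print(f"LOLP = {lolp:.4f}, EENS = {eens:.1f} MWh/yr")
```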
NASA Astrophysics Data System (ADS)
Wang, Fei
2013-09-01
Geiger-mode detectors have single-photon sensitivity and picosecond timing resolution, which makes them good candidates for low-light-level ranging applications, especially in flash three-dimensional imaging applications where the received laser power is extremely limited. Another advantage of Geiger-mode APDs is their capability to deliver a large output current, which can drive CMOS timing circuits directly, meaning that large-format focal plane arrays can be easily fabricated using mature CMOS technology. However, Geiger-mode detector based FPAs can only measure the range information of a scene, not the reflectivity. Reflectivity is a major characteristic which can help target classification and identification. Because photon detection obeys Poisson statistics, the detection probability is tightly connected to the number of incident photons. Employing this relation, a signal intensity estimation method based on probability inversion is proposed. Instead of measuring intensity directly, several detections are conducted, the detection probability is obtained, and the intensity is estimated using this method. The relation between the estimator's accuracy, measuring range and number of detections is discussed based on statistical theory. Finally, a Monte-Carlo simulation is conducted to verify the correctness of this theory. Using 100 detections, a signal intensity of up to 4.6 photons per detection can be measured with this method. With slight modification of the measuring strategy, intensity information can be obtained using current Geiger-mode detector based FPAs, which can enrich the information acquired and broaden the application field of the current technology.
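The probability-inversion idea follows directly from Poisson statistics: with a mean of n primary photoelectrons per gate, the per-gate detection probability is P = 1 − exp(−n), so n can be recovered from the observed detection fraction. A minimal sketch (unit detection efficiency and zero dark counts are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def estimate_intensity(n_true, n_gates=100):
    """Estimate mean photons/gate from the fraction of gates with at least one Poisson arrival."""
    detections = rng.poisson(n_true, size=n_gates) > 0
    p_hat = detections.mean()
    if p_hat >= 1.0:                 # every gate fired: intensity too high to invert
        return np.inf
    return -np.log(1.0 - p_hat)      # invert P = 1 - exp(-n)

for n_true in (0.5, 1.0, 2.0, 4.6):
    estimates = [estimate_intensity(n_true) for _ in range(200)]
    finite = [e for e in estimates if np.isfinite(e)]
    print(f"true {n_true:.1f}  mean estimate {np.mean(finite):.2f}  "
          f"saturated trials {len(estimates) - len(finite)}")
```

The saturated-trial count in the output illustrates why the usable intensity range is bounded for a fixed number of detections.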
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2014 CFR
2014-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
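The recommended strategy, fitting the parametric distribution to the cumulative pattern of recoveries rather than to interval bounds, can be sketched with an interval-censored maximum likelihood fit; the sampling times, counts and the lognormal choice below are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

# Illustrative interval-censored retention-time data:
# counts of propagules recovered within each sampling interval (hours); all assumed recovered by 48 h
edges  = np.array([0, 2, 4, 8, 12, 24, 48])          # interval bounds
counts = np.array([5, 40, 120, 180, 110, 45])        # recovered per interval (invented numbers)

def neg_log_lik(params):
    s, scale = np.exp(params)                         # enforce positivity
    cdf = lognorm.cdf(edges, s, scale=scale)
    p = np.clip(np.diff(cdf), 1e-12, None)            # probability mass of each interval
    return -(counts * np.log(p)).sum()

res = minimize(neg_log_lik, x0=np.log([0.5, 10.0]), method="Nelder-Mead")
s_hat, scale_hat = np.exp(res.x)
print(f"lognormal fit: sigma = {s_hat:.3f}, median retention = {scale_hat:.2f} h")
```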
How Can Histograms Be Useful for Introducing Continuous Probability Distributions?
ERIC Educational Resources Information Center
Derouet, Charlotte; Parzysz, Bernard
2016-01-01
The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…
ERIC Educational Resources Information Center
Moses, Tim; Oh, Hyeonjoo J.
2009-01-01
Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
Planning spatial sampling of the soil from an uncertain reconnaissance variogram
NASA Astrophysics Data System (ADS)
Lark, R. Murray; Hamilton, Elliott M.; Kaninga, Belinda; Maseka, Kakoma K.; Mutondo, Moola; Sakala, Godfrey M.; Watts, Michael J.
2017-12-01
An estimated variogram of a soil property can be used to support a rational choice of sampling intensity for geostatistical mapping. However, it is known that estimated variograms are subject to uncertainty. In this paper we address two practical questions. First, how can we make a robust decision on sampling intensity, given the uncertainty in the variogram? Second, what are the costs incurred in terms of oversampling because of uncertainty in the variogram model used to plan sampling? To achieve this we show how samples of the posterior distribution of variogram parameters, from a computational Bayesian analysis, can be used to characterize the effects of variogram parameter uncertainty on sampling decisions. We show how one can select a sample intensity so that a target value of the kriging variance is not exceeded with some specified probability. This will lead to oversampling, relative to the sampling intensity that would be specified if there were no uncertainty in the variogram parameters. One can estimate the magnitude of this oversampling by treating the tolerable grid spacing for the final sample as a random variable, given the target kriging variance and the posterior sample values. We illustrate these concepts with some data on total uranium content in a relatively sparse sample of soil from agricultural land near mine tailings in the Copperbelt Province of Zambia.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
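The central effect, that a threshold set from estimated parameters produces an expected failure frequency above the nominal level, can be reproduced with a short simulation (a lognormal risk factor, a sample of 30 observations and a nominal 1% failure probability are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
p_nominal = 0.01          # required failure probability
n_data = 30               # size of the sample used to estimate parameters
n_trials = 20_000

failures = 0
for _ in range(n_trials):
    # True risk factor: lognormal with parameters unknown to the decision maker
    mu, sigma = 0.0, 1.0
    sample = rng.lognormal(mu, sigma, size=n_data)
    # Estimate parameters and set the threshold at the estimated (1 - p) quantile
    mu_hat = np.log(sample).mean()
    sigma_hat = np.log(sample).std(ddof=1)
    threshold = np.exp(mu_hat + sigma_hat * 2.3263)    # z_{0.99} ~= 2.3263
    # A new realization of the risk factor exceeding the threshold counts as a failure
    failures += rng.lognormal(mu, sigma) > threshold

print("nominal:", p_nominal, " realized:", failures / n_trials)
```

The realized frequency comes out noticeably above 1%, illustrating the paper's point that parameter uncertainty inflates failure probability.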
NASA Astrophysics Data System (ADS)
Wilks, Daniel S.
1993-10-01
Performance of 8 three-parameter probability distributions for representing annual extreme and partial duration precipitation data at stations in the northeastern and southeastern United States is investigated. Particular attention is paid to fidelity on the right tail, through use of a bootstrap procedure simulating extrapolation on the right tail beyond the data. It is found that the beta-κ distribution best describes the extreme right tail of annual extreme series, and the beta-P distribution is best for the partial duration data. The conventionally employed two-parameter Gumbel distribution is found to substantially underestimate probabilities associated with the larger precipitation amounts for both annual extreme and partial duration data. Fitting the distributions using left-censored data did not result in improved fits to the right tail.
NASA Astrophysics Data System (ADS)
Campos-García, Manuel; Granados-Agustín, Fermín.; Cornejo-Rodríguez, Alejandro; Estrada-Molina, Amilcar; Avendaño-Alejo, Maximino; Moreno-Oliva, Víctor Iván.
2013-11-01
In order to obtain a clearer interpretation of the Intensity Transport Equation (ITE), in this work we propose an algorithm to solve it for some particular wavefronts and their corresponding intensity distributions. By simulating intensity distributions in some planes, the ITE turns into a Poisson equation with Neumann boundary conditions. The Poisson equation is solved by means of the iterative algorithm SOR (Simultaneous Over-Relaxation).
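A bare-bones SOR iteration for the discretized Poisson equation conveys the idea; for brevity the sketch uses a square grid with zero Dirichlet boundary values rather than the Neumann conditions of the ITE, and the source term is an arbitrary smooth stand-in for the measured intensity data:

```python
import numpy as np

def sor_poisson(f, h, omega=1.8, tol=1e-6, max_iter=20_000):
    """Solve laplacian(u) = f on a square grid with u = 0 on the boundary (Dirichlet)."""
    u = np.zeros_like(f)
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, f.shape[0] - 1):
            for j in range(1, f.shape[1] - 1):
                new = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1] - h * h * f[i, j])
                change = omega * (new - u[i, j])
                u[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return u

# Example: a smooth stand-in source term on a 41x41 unit grid
n, h = 41, 1.0 / 40
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(np.pi * X) * np.sin(np.pi * Y)
u = sor_poisson(f, h)
print("max |u| =", np.abs(u).max())   # analytic value is 1 / (2*pi^2) ~= 0.0507
```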
Analyzing phenological extreme events over the past five decades in Germany
NASA Astrophysics Data System (ADS)
Schleip, Christoph; Menzel, Annette; Estrella, Nicole; Graeser, Philipp
2010-05-01
As climate change may alter the frequency and intensity of extreme temperatures, we analysed whether the warming of the last 5 decades has already changed the statistics of phenological extreme events. In this context, two extreme value statistical concepts are discussed and applied to existing phenological datasets of the German Weather Service (DWD) in order to derive probabilities of occurrence for extreme early or late phenological events. We analyse four phenological groups, "beginning of flowering", "leaf foliation", "fruit ripening" and "leaf colouring", as well as DWD indicator phases of the "phenological year". Additionally, we put an emphasis on a between-species analysis, a comparison of differences in extreme onsets between three common northern conifers. Furthermore, we conducted a within-species analysis with different phases of horse chestnut throughout a year. The first statistical approach fits data to a Gaussian model using traditional statistical techniques, and then analyses the extreme quantile. The key point of this approach is the adoption of an appropriate probability density function (PDF) for the observed data and the assessment of the change of the PDF parameters in time. The full analytical description in terms of the estimated PDF for defined time steps of the observation period allows probability assessments of extreme values for e.g. annual or decadal time steps. Related to this approach is the possibility of counting the onsets which fall in our defined extreme percentiles. The estimation of the probability of extreme events on the basis of the whole data set is in contrast to analyses with the generalized extreme value distribution (GEV). The second approach deals with the extreme PDFs themselves and fits the GEV distribution to annual minima of phenological series to provide useful estimates of return levels. For flowering and leaf unfolding phases, exceptionally early extremes are seen since the mid 1980s and especially in the single years 1961, 1990 and 2007, whereas exceptionally late events are seen in the year 1970. Summer phases such as fruit ripening exhibit stronger shifts to early extremes than spring phases. Leaf colouring phases reveal increasing probability for late extremes. The GEV-estimated 100-year events for Picea, Pinus and Larix correspond to extreme early onsets of about -27, -31.48 and -32.79 days, respectively. If we assume non-stationary minimum data, we get a more extreme 100-year event of about -35.40 days for Picea, but associated with wider confidence intervals. The GEV is simply another probability distribution, but for purposes of extreme analysis in phenology it should be considered as equally important as (if not more important than) the Gaussian PDF approach.
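The GEV return-level calculation for annual minima can be sketched with standard tools: the minima are negated so that the usual block-maxima convention applies, a GEV is fitted, and the 100-year level is read off the fitted quantile function (the simulated onset anomalies below are placeholders, not DWD data):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Simulated annual earliest-onset anomalies (days; negative = earlier than the mean date)
annual_minima = -rng.gumbel(loc=15.0, scale=6.0, size=50)

# Fit a GEV to the negated minima (block-maxima convention), then transform back
shape, loc, scale = genextreme.fit(-annual_minima)
return_period = 100.0
level_max = genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
print(f"estimated 100-year earliest onset: {-level_max:.1f} days")
```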
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
NASA Astrophysics Data System (ADS)
Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao
2018-06-01
This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses of the experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak follows the three-parameter Weibull distribution. Furthermore, the range of and relationships among the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions is comprehensively presented, and the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models. The parameter values were found to decrease with increased stiffness of the elastic support. The damage criterion of the structure model caused by the wave impact was initially discussed, and the structure model was destroyed when the average slamming time was greater than a certain value during the duration of the wave impact.
On the inequivalence of the CH and CHSH inequalities due to finite statistics
NASA Astrophysics Data System (ADS)
Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.
2017-06-01
Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.
Precise Determination of the Intensity of 226Ra Alpha Decay to the 186 keV Excited State
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.P. LaMont; R.J. Gehrke; S.E. Glover
There is a significant discrepancy in the reported values for the emission probability of the 186 keV gamma-ray resulting from the alpha decay of 226Ra to the 186 keV excited state of 222Rn. Published values fall in the range of 3.28 to 3.59 gamma-rays per 100 alpha-decays. An interesting observation is that the lower value, 3.28, is based on measuring the 186 keV gamma-ray intensity relative to the 226Ra alpha-branch to the 186 keV level. The higher values, which are close to 3.59, are based on measuring the gamma-ray intensity from mass standards of 226Ra that are traceable to the mass standards prepared by Hönigschmid in the early 1930s. This discrepancy was resolved in this work by carefully measuring the 226Ra alpha-branch intensities, then applying the theoretical E2 multipolarity internal conversion coefficient of 0.692±0.007 to calculate the 186 keV gamma-ray emission probability. The measured value for the alpha branch to the 186 keV excited state was (6.16±0.03)%, which gives a 186 keV gamma-ray emission probability of (3.64±0.04)%. This value is in excellent agreement with the most recently reported 186 keV gamma-ray emission probabilities determined using 226Ra mass standards.
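The quoted numbers can be checked with the standard branching relation (the arithmetic below is a reader's consistency check, not text from the report): the gamma-ray emission probability per decay is the alpha branch to the level divided by one plus the total internal conversion coefficient of the de-exciting transition,

p_gamma = p_alpha / (1 + alpha_T) = 0.0616 / (1 + 0.692) ≈ 0.0364, i.e. (3.64 ± 0.04)%.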
NASA Astrophysics Data System (ADS)
Candela, A.; Brigandì, G.; Aronica, G. T.
2014-07-01
In this paper, a procedure to derive synthetic flood design hydrographs (SFDHs) is presented, using a bivariate representation of rainfall forcing (rainfall duration and intensity) via copulas, which describe and model the correlation between two variables independently of the marginal laws involved, coupled with a distributed rainfall-runoff model. Rainfall-runoff modelling (R-R modelling) for estimating the hydrological response at the outlet of a catchment was performed using a conceptual, fully distributed procedure based on the Soil Conservation Service Curve Number method as an excess rainfall model and on a distributed unit hydrograph with climatic dependencies for the flow routing. Travel time computation, based on the distributed unit hydrograph definition, was performed by implementing a procedure based on flow paths, determined from a digital elevation model (DEM) and roughness parameters obtained from distributed geographical information. In order to estimate the primary return period of the SFDHs, which provides the probability of occurrence of a flood hydrograph, peaks and flow volumes obtained through R-R modelling were treated statistically using copulas. Finally, the shapes of the hydrographs were generated on the basis of historically significant flood events, via cluster analysis. An application of the procedure described above has been carried out, and results are presented for the case study of the Imera catchment in Sicily, Italy.
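The bivariate rainfall step can be sketched with a copula sample: correlated uniforms are generated from a copula and mapped through marginal distributions of storm duration and mean intensity. A Gaussian copula, the dependence parameter and the marginal families below are illustrative assumptions; the study itself may use a different copula family fitted to the Imera data:

```python
import numpy as np
from scipy.stats import norm, gamma, expon, spearmanr

rng = np.random.default_rng(8)
n = 10_000
rho = 0.3                                     # assumed dependence between duration and intensity

# Gaussian copula: correlated standard normals mapped to uniforms
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = norm.cdf(z)

# Illustrative marginals: storm duration (h) ~ Exponential, mean intensity (mm/h) ~ Gamma
duration  = expon.ppf(u[:, 0], scale=6.0)
intensity = gamma.ppf(u[:, 1], a=2.0, scale=4.0)

depth = duration * intensity                  # event rainfall depth (mm), fed to the R-R model
print("rank correlation of sampled pairs:", spearmanr(duration, intensity)[0])
print("mean event depth (mm):", depth.mean())
```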
Confidence as Bayesian Probability: From Neural Origins to Behavior.
Meyniel, Florent; Sigman, Mariano; Mainen, Zachary F
2015-10-07
Research on confidence spreads across several sub-fields of psychology and neuroscience. Here, we explore how a definition of confidence as Bayesian probability can unify these viewpoints. This computational view entails that there are distinct forms in which confidence is represented and used in the brain, including distributional confidence, pertaining to neural representations of probability distributions, and summary confidence, pertaining to scalar summaries of those distributions. Summary confidence is, normatively, derived or "read out" from distributional confidence. Neural implementations of readout will trade off optimality versus flexibility of routing across brain systems, allowing confidence to serve diverse cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both quantum theory and the computational complexity theory point of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Measurements of gas hydrate formation probability distributions on a quasi-free water droplet
NASA Astrophysics Data System (ADS)
Maeda, Nobuo
2014-06-01
A High Pressure Automated Lag Time Apparatus (HP-ALTA) can measure gas hydrate formation probability distributions from water in a glass sample cell. In an HP-ALTA gas hydrate formation originates near the edges of the sample cell and gas hydrate films subsequently grow across the water-guest gas interface. It would ideally be desirable to be able to measure gas hydrate formation probability distributions of a single water droplet or mist that is freely levitating in a guest gas, but this is technically challenging. The next best option is to let a water droplet sit on top of a denser, immiscible, inert, and wall-wetting hydrophobic liquid to avoid contact of a water droplet with the solid walls. Here we report the development of a second generation HP-ALTA which can measure gas hydrate formation probability distributions of a water droplet which sits on a perfluorocarbon oil in a container that is coated with 1H,1H,2H,2H-Perfluorodecyltriethoxysilane. It was found that the gas hydrate formation probability distributions of such a quasi-free water droplet were significantly lower than those of water in a glass sample cell.
A New Random Walk for Replica Detection in WSNs.
Aalsalem, Mohammed Y; Khan, Wazir Zada; Saad, N M; Hossain, Md Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram
2016-01-01
Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL have used SRW for the random selection of witness nodes, which is problematic because frequently revisiting previously passed nodes leads to longer delays, higher energy expenditure and a lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk, and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, the single stage memory random walk is combined with network division, aiming to decrease the communication and memory costs while keeping the detection probability higher. Through intensive simulations it is verified that SSRWND guarantees higher witness node security with moderate communication and memory overheads. SSRWND is expedient for security-oriented application fields of WSNs such as military and medical applications.
NASA Astrophysics Data System (ADS)
Sarantopoulou, E.; Gomoiu, I.; Kollia, Z.; Cefalas, A. C.
2011-01-01
This work is a part of the ESA/EU SURE project aiming to quantify the survival probability of fungal spores in space under solar irradiation in the vacuum ultraviolet (VUV) (110-180 nm) spectral region. The contribution and impact of VUV photons, vacuum, low temperature and their synergies on the survival probability of Aspergillus terreus spores is measured at simulated space conditions on Earth. To simulate the solar VUV irradiation, the spores are irradiated with a continuous discharge VUV hydrogen photon source and a molecular fluorine laser, at low and high photon intensities of 10^15 photons m^-2 s^-1 and 3.9×10^27 photons pulse^-1 m^-2 s^-1, respectively. The survival probability of spores is independent of the intensity and the fluence of photons, within certain limits, in agreement with previous studies. The spores are shielded by a thin carbon layer, which is formed quickly on the external surface of the proteinaceous membrane at higher photon intensities at the start of the VUV irradiation. Extrapolating the results to space conditions, for an interplanetary direct transfer orbit from Mars to Earth, the spores would be irradiated with 3.3×10^21 solar VUV photons m^-2. This photon fluence is equivalent to the irradiation of spores on Earth with 54 laser pulses, with an experimental ~92% survival probability, disregarding the contribution of space vacuum and low temperature, or to continuous solar VUV irradiation for 38 days in space near the Earth with an extrapolated ~61% survival probability. The experimental results indicate that the damage to spores is mainly from the dehydration stress in vacuum. The relatively high survival probability after 4 days in vacuum (~34%) is due to the exudation of proteins on the external membrane, thus preventing further dehydration of the spores. In addition, the survival probability increases to ~54% at 10 K with 0.12 K/s cooling and heating rates.
Measurement of Upconversion in Er:YAG via Z-scan
2011-10-01
4I9/2) is used to fit the data (Fig. 1). Included in the model is a rate equation for the population of each manifold, Ni, where i runs from 1 to 4 ... + W31); (3)   dN4/dt = Cup N2^2 − N4(W43 + W42 + W41). (4)   Here, I is the laser intensity, σ is the absolute cross section, Wij is the relaxation rate from ... level i to level j, Cup is the upconversion coefficient, and fa (fe) is the probability given by the Boltzmann distribution that an ion in the 4I15/2
Excitation of the Werner bands of H2 by electron impact
NASA Technical Reports Server (NTRS)
Stone, E. J.; Zipf, E. C.
1972-01-01
Absolute cross sections for the excitation of the H2 Werner band system were measured from energy threshold to 300 eV for electron impact on H2. The bands were observed in emission in the wavelength region 1100 Å to 1250 Å. The measured cross sections were compared with published transition probabilities, leading to the conclusion that the Werner bands are suitable as the basis for a relative spectral response calibration only when the bands are observed under sufficiently high resolution. The effect of the perturbation between the C ¹Πu and B ¹Σu states of the hydrogen molecule was clearly observed in anomalies in the rotational intensity distribution in bands of the (3, v'') progression.
Sonnenborn, Ulrich
2016-10-01
Among the gram-negative microorganisms with probiotic properties, Escherichia coli strain Nissle 1917 (briefly, EcN) is probably the most intensively investigated bacterial strain today. For nearly 100 years, the EcN strain has been used as the active pharmaceutical ingredient in a licensed medicinal product that is distributed in Germany and several other countries. Over the last few decades, novel probiotic activities have been detected which, taken together, are specific to this versatile E. coli strain. This review gives a short overview of the discovery and history of the EcN strain. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The light-footed clapper rail: An update
Wilburn, Cynthia A.; Jorgensen, Paul D.; Massey, Barbara W.; Basham, Venita A.
1979-01-01
A preliminary report on the Light-footed Clapper Rail, Rallus longirostris levipes, estimated a population of 500-750 birds in California (Wilbur, Am. Birds 28:868-870, 1974). Since then, additional work has been accomplished, most notably: (1) an intensive study of Carpinteria Marsh, Santa Barbara County, 1976-1977 (Basham); (2) a series of winter high tide counts at Anaheim Bay, Orange County, 1975-1977 (Massey, C. Collins, J. Lindell, M. Silbernagle); and (3) a detailed investigation of the rail population of Tijuana Slough, San Diego County, 1973-1974 (Jorgensen). These, plus short-term studies by the authors and K. Bender, D. Pinkler, P. Johns, and S. Lockhart, have shown that the original estimate was unrealistic. A more probable winter total is 300 rails, distributed as described below.
Approximate supernova remnant dynamics with cosmic ray production
NASA Technical Reports Server (NTRS)
Voelk, H. J.; Drury, L. O.; Dorfi, E. A.
1985-01-01
Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of cosmic rays. Recent shock acceleration models treating the cosmic rays (CRs) as test particles in a prescribed Supernova Remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f_source(p) ∝ p^(-a) for the particles ultimately injected into the Interstellar Medium (ISM). This spectrum extends almost to the momentum p = 10^6 GeV/c, where the break in the observed spectrum occurs. The calculated power-law index a ≲ 4.2 agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test particle approximation.
Fragment size distribution in viscous bag breakup of a drop
NASA Astrophysics Data System (ADS)
Kulkarni, Varun; Bulusu, Kartik V.; Plesniak, Michael W.; Sojka, Paul E.
2015-11-01
In this study we examine the drop size distribution resulting from the fragmentation of a single drop in the presence of a continuous air jet. Specifically, we study the effect of the Weber number, We, and the Ohnesorge number, Oh, on the disintegration process. The regime of breakup considered is observed between 12 <= We <= 16 for Oh <= 0.1. Experiments are conducted using phase Doppler anemometry. Both the number and volume fragment size probability distributions are plotted. The volume probability distribution revealed a bi-modal behavior with two distinct peaks: one corresponding to the rim fragments and the other to the bag fragments. This behavior was suppressed in the number probability distribution. Additionally, we employ an in-house particle detection code to isolate the rim fragment size distribution from the total probability distributions. Our experiments showed that the bag fragments are smaller in diameter and larger in number, while the rim fragments are larger in diameter and smaller in number. Furthermore, with increasing We for a given Oh we observe a large number of small-diameter drops and a small number of large-diameter drops. On the other hand, with increasing Oh for a fixed We the opposite is seen.
Spatial Interpolation of Rain-field Dynamic Time-Space Evolution in Hong Kong
NASA Astrophysics Data System (ADS)
Liu, P.; Tung, Y. K.
2017-12-01
Accurate and reliable measurement and prediction of the spatial and temporal distribution of rain fields over a wide range of scales are important topics in hydrologic investigations. In this study, a geostatistical treatment of the precipitation field is adopted. To estimate the rainfall intensity over a study domain from the sample values and the spatial structure derived from the radar data, the cumulative distribution functions (CDFs) at all unsampled locations were estimated. Indicator Kriging (IK) was used to estimate the exceedance probabilities for different pre-selected cutoff levels, and a procedure was implemented for interpolating CDF values between the thresholds derived from the IK. Different interpolation schemes for the CDF were proposed and their influences on the performance were also investigated. The performance measures and visual comparison between the observed rain field and the IK-based estimation suggested that the proposed method can provide good estimates of the indicator variables and is capable of producing realistic images.
Photon-number statistics in resonance fluorescence
NASA Astrophysics Data System (ADS)
Lenstra, D.
1982-12-01
The theory of photon-number statistics in resonance fluorescence is treated, starting with the general formula for the emission probability of n photons during a given time interval T. The results fully confirm results formerly obtained by Cook that were based on the theory of atomic motion in a traveling wave. General expressions for the factorial moments are derived and explicit results for the mean and the variance are given. It is explicitly shown that the distribution function tends to a Gaussian when T becomes much larger than the natural lifetime of the excited atom. The speed of convergence towards the Gaussian is found to be typically slow, that is, the third normalized central moment (or the skewness) is proportional to T^(-1/2). However, numerical results illustrate that the overall features of the distribution function are already well represented by a Gaussian when T is larger than a few natural lifetimes only, at least if the intensity of the exciting field is not too small and its detuning is not too large.
Miki, Shigehito; Yamashita, Taro; Wang, Zhen; Terai, Hirotaka
2014-04-07
We present the characterization of two-dimensionally arranged 64-pixel NbTiN superconducting nanowire single-photon detector (SSPD) array for spatially resolved photon detection. NbTiN films deposited on thermally oxidized Si substrates enabled the high-yield production of high-quality SSPD pixels, and all 64 SSPD pixels showed uniform superconducting characteristics within the small range of 7.19-7.23 K of superconducting transition temperature and 15.8-17.8 μA of superconducting switching current. Furthermore, all of the pixels showed single-photon sensitivity, and 60 of the 64 pixels showed a pulse generation probability higher than 90% after photon absorption. As a result of light irradiation from the single-mode optical fiber at different distances between the fiber tip and the active area, the variations of system detection efficiency (SDE) in each pixel showed reasonable Gaussian distribution to represent the spatial distributions of photon flux intensity.
Damaso, Clarissa R
2018-02-01
In 1796, Edward Jenner developed the smallpox vaccine, consisting of pustular material obtained from lesions on cows affected by so-called cow-pox. The disease, caused by cowpox virus, confers cross-protection against smallpox. However, historical evidence suggests that Jenner might have used vaccinia virus or even horsepox virus instead of cowpox virus. Mysteries surrounding the origin and nature of the smallpox vaccine persisted during the 19th century, a period of intense exchange of vaccine strains, including the Beaugency lymph. This lymph was obtained from spontaneous cases of cow-pox in France in 1866 and then distributed worldwide. A detailed Historical Review of the distribution of the Beaugency lymph supports recent genetic analyses of extant vaccine strains, suggesting the lymph was probably a vaccinia strain or a horsepox-like virus. This Review is a historical investigation that revisits the mysteries of the smallpox vaccine and reveals an intricate evolutionary relationship of extant vaccinia strains. Copyright © 2018 Elsevier Ltd. All rights reserved.
[Assessment of biological corrosion of ferroconcrete of ground-based industrial structures].
Rozhanskaia, A M; Piliashenko-Novokhatnyĭ, A I; Purish, L M; Durcheva, V N; Kozlova, I A
2001-01-01
A structure at a nuclear plant built in 1983 and kept in dead storage for 15 years was the subject of a microbiological investigation, the purpose being to estimate the degree of contamination of its ferroconcrete structures by corrosion-hazardous microorganisms and to predict their biocorrosion state after the plant is put into operation. The ubiquitous distribution of sulphur-cycle bacteria (thionic and sulphate-reducing bacteria) on the surface and in the bulk of the concrete structures, and their confinement to corrosion products of the concrete and reinforcement bars of the investigated building, have been shown. Sulphate-reducing bacteria were demonstrated to be the most widespread group at all the sampling points. An indirect estimate of the degree of participation of the microbial communities in the biological damage of the ferroconcrete has been made, based on the accumulation rate of the aggressive gaseous metabolites carbon dioxide and hydrogen. The probability that the biocorrosion situation will deteriorate under full-scale operation of the facility has been substantiated.
NASA Astrophysics Data System (ADS)
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy to implement, approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computationally efficient approximation for a finite sum of lognormals of at least ten summands, and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
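A minimal illustration of the simpler, classical alternative (Fenton-Wilkinson moment matching) is given in the Python sketch below: a single lognormal is fitted to the exact mean and variance of the weighted sum and checked against Monte Carlo. The weights and lognormal parameters are hypothetical, and the paper's modified-moments and asymptotic-series corrections are not reproduced.

    import numpy as np

    # Hypothetical portfolio weights and lognormal parameters (mu_i, sigma_i)
    w     = np.array([0.2, 0.3, 0.5])
    mu    = np.array([0.00, 0.05, 0.10])
    sigma = np.array([0.20, 0.25, 0.30])

    # Exact mean and variance of the weighted sum (summands assumed independent)
    m = np.sum(w * np.exp(mu + 0.5 * sigma**2))
    v = np.sum(w**2 * (np.exp(sigma**2) - 1.0) * np.exp(2.0 * mu + sigma**2))

    # Classical moment matching: a single lognormal with the same mean and variance
    sigma_s2 = np.log(1.0 + v / m**2)
    mu_s     = np.log(m) - 0.5 * sigma_s2

    # Check the matched lognormal against a Monte Carlo sample of the true sum
    rng = np.random.default_rng(0)
    s   = (w * rng.lognormal(mu, sigma, size=(200_000, 3))).sum(axis=1)
    print("Monte Carlo mean/variance :", s.mean(), s.var())
    print("Matched fit mean/variance :", np.exp(mu_s + 0.5 * sigma_s2),
          (np.exp(sigma_s2) - 1.0) * np.exp(2.0 * mu_s + sigma_s2))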
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W.
An account is given of the method used to quantify the risks accruing to the use of a radioisotope thermoelectric generator fueled by Pu-238 dioxide aboard the Space Shuttle-launched Ulysses mission. After using a Monte Carlo technique to develop probability distributions for the radiological consequences of a range of accident scenarios throughout the mission, factors affecting those consequences are identified in conjunction with their probability distributions. The functional relationship among all the factors is then established, and probability distributions for all factor effects are combined by means of a Monte Carlo technique.
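The general structure of such an analysis can be sketched as a product of factor effects sampled from their probability distributions. The Python fragment below is purely illustrative: every distribution, factor and constant is an assumption standing in for the mission-specific scenario data used in the actual study.

    import numpy as np

    rng = np.random.default_rng(7)
    n   = 200_000

    # Assumed factor distributions (placeholders, not mission data)
    release_frac = rng.beta(0.5, 20.0, size=n)       # fraction of the fuel inventory released
    dispersion   = rng.lognormal(0.0, 0.8, size=n)   # relative atmospheric dispersion factor
    exposure     = rng.lognormal(0.0, 0.5, size=n)   # relative population exposure factor
    inventory    = 1.0                               # normalised Pu-238 dioxide inventory

    # Radiological consequence of each trial as the product of the factor effects
    consequence = inventory * release_frac * dispersion * exposure

    # Summarise the resulting probability distribution of consequences
    for q in (50, 95, 99):
        print(f"{q}th percentile consequence: {np.percentile(consequence, q):.4f}")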
Score distributions of gapped multiple sequence alignments down to the low-probability tail
NASA Astrophysics Data System (ADS)
Fieth, Pascal; Hartmann, Alexander K.
2016-08-01
Assessing the significance of alignment scores of optimally aligned DNA or amino acid sequences can be achieved via the knowledge of the score distribution of random sequences. But this requires obtaining the distribution in the biologically relevant high-scoring region, where the probabilities are exponentially small. For gapless local alignments of infinitely long sequences this distribution is known analytically to follow a Gumbel distribution. Distributions for gapped local alignments and global alignments of finite lengths can only be obtained numerically. To obtain results for the small-probability region, specific statistical mechanics-based rare-event algorithms can be applied. In previous studies, this was achieved for pairwise alignments. They showed that, contrary to results from previous simple sampling studies, strong deviations from the Gumbel distribution occur in the case of finite sequence lengths. Here we extend the studies to multiple sequence alignments with gaps, which are much more relevant for practical applications in molecular biology. We study the distributions of scores over a large range of the support, reaching probabilities as small as 10^-160, for global and local (sum-of-pair scores) multiple alignments. We find that even after suitable rescaling, eliminating the sequence-length dependence, the distributions for multiple alignment differ from the pairwise alignment case. Furthermore, we also show that the previously discussed Gaussian correction to the Gumbel distribution needs to be refined, also for the case of pairwise alignments.
Site occupancy models with heterogeneous detection probabilities
Royle, J. Andrew
2006-01-01
Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
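To make the form of the likelihood concrete, the Python sketch below writes a zero-inflated binomial mixture for site detection histories in which the detection probability p is mixed over a Beta distribution (one of several possible mixing classes discussed in the article). The detection data, number of visits and starting values are hypothetical.

    import numpy as np
    from scipy import stats, integrate, optimize

    # Hypothetical detection histories: y[i] = number of detections at site i in J visits
    J = 5
    y = np.array([0, 0, 3, 1, 0, 2, 5, 0, 0, 4])

    def neg_log_lik(params):
        """Zero-inflated binomial mixture with a Beta(a, b) mixing distribution for p."""
        psi_logit, log_a, log_b = params
        psi  = 1.0 / (1.0 + np.exp(-psi_logit))   # occupancy probability
        a, b = np.exp(log_a), np.exp(log_b)       # Beta parameters of the detection mixture
        ll = 0.0
        for yi in y:
            # Pr(y | occupied): integrate the binomial over the Beta mixing distribution
            f = lambda p: stats.binom.pmf(yi, J, p) * stats.beta.pdf(p, a, b)
            pr_occ, _ = integrate.quad(f, 0.0, 1.0)
            ll += np.log(psi * pr_occ + (1.0 - psi) * (yi == 0))
        return -ll

    fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    print("estimated occupancy probability:", 1.0 / (1.0 + np.exp(-fit.x[0])))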
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportional fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
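As a minimal sketch of the fitting step, the Python code below applies iterative proportional fitting to a two-dimensional probability table, rescaling an initial seed until its row and column sums match imposed marginal probabilities. The three-category seed and marginals are hypothetical, and the article's higher-order constraints and sparse-matrix implementation are not reproduced.

    import numpy as np

    def ipf_2d(seed, row_marg, col_marg, n_iter=100, tol=1e-10):
        """Rescale a seed joint probability table until its row/column sums
        match the imposed marginal probabilities."""
        p = seed / seed.sum()
        for _ in range(n_iter):
            p *= (row_marg / p.sum(axis=1))[:, None]   # match row marginals
            p *= (col_marg / p.sum(axis=0))[None, :]   # match column marginals
            if np.allclose(p.sum(axis=1), row_marg, atol=tol):
                break
        return p

    # Hypothetical three-facies example: a uniform initial guess plus target marginals
    seed     = np.ones((3, 3))
    row_marg = np.array([0.5, 0.3, 0.2])
    col_marg = np.array([0.4, 0.4, 0.2])
    print(ipf_2d(seed, row_marg, col_marg).round(4))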
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
Modeling highway travel time distribution with conditional probability models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distributions as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
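The core computation can be illustrated with a small Python example: a marginal travel time distribution on an upstream link is combined with a link-to-link conditional probability table to obtain the route travel time distribution. All travel times and probabilities below are hypothetical.

    import numpy as np

    # Hypothetical discretized travel times (minutes) on two successive links
    t1 = np.arange(8, 13)                      # upstream link travel times
    t2 = np.arange(9, 14)                      # downstream link travel times
    p1 = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # marginal distribution of the upstream link

    # Link-to-link conditional probabilities P(t2 | t1); each row sums to one, and a
    # slow upstream link makes a slow downstream link more likely (positive correlation)
    p2_given_1 = np.array([
        [0.4, 0.3, 0.2, 0.1, 0.0],
        [0.3, 0.3, 0.2, 0.1, 0.1],
        [0.1, 0.3, 0.3, 0.2, 0.1],
        [0.1, 0.1, 0.3, 0.3, 0.2],
        [0.0, 0.1, 0.2, 0.3, 0.4],
    ])

    # Route travel time distribution: P(T = t1 + t2) = sum_i P(t1_i) P(t2_j | t1_i)
    route = {}
    for i, ti in enumerate(t1):
        for j, tj in enumerate(t2):
            route[ti + tj] = route.get(ti + tj, 0.0) + p1[i] * p2_given_1[i, j]

    for t in sorted(route):
        print(f"route time {t:2d} min: probability {route[t]:.3f}")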
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saloman, Edward B.; Kramida, Alexander
2017-08-01
The energy levels, observed spectral lines, and transition probabilities of the neutral vanadium atom, V I, have been compiled. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g-factors and leading percentage compositions for the levels are included where available, as well as wavelengths calculated from the energy levels (Ritz wavelengths). Wavelengths are reported for 3985 transitions, and 549 energy levels are determined. The observed relative intensities, normalized to a common scale, are provided.
Probability of success for phase III after exploratory biomarker analysis in phase II.
Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver
2017-05-01
The probability of success, or average power, describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to get the necessary information about the risk and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
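The weighting of power by a distribution of the treatment effect can be sketched in a few lines of Python: the probability of success is the phase III power averaged over effects drawn from the phase II estimate's distribution. All numbers below (effect estimate, standard errors, sample size) are hypothetical, and the sketch ignores the subgroup selection step studied in the paper.

    import numpy as np
    from scipy import stats

    theta_hat, se2 = 0.30, 0.15   # assumed phase II effect estimate and its standard error
    n_per_arm      = 200          # planned phase III sample size per arm
    sigma          = 1.0          # assumed outcome standard deviation
    alpha          = 0.025        # one-sided significance level

    se3 = sigma * np.sqrt(2.0 / n_per_arm)   # phase III standard error of the effect estimate
    z_a = stats.norm.ppf(1.0 - alpha)

    def power(theta):
        """Power of the phase III test for a true effect theta."""
        return 1.0 - stats.norm.cdf(z_a - theta / se3)

    # Probability of success: average the power over the phase II effect distribution
    rng    = np.random.default_rng(1)
    thetas = rng.normal(theta_hat, se2, size=100_000)
    print("conditional power at theta_hat:", power(theta_hat))
    print("probability of success (POS)  :", power(thetas).mean())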
General formulation of long-range degree correlations in complex networks
NASA Astrophysics Data System (ADS)
Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke
2018-06-01
We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed elevation. (Woodard-USGS)
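A minimal simulation of this random-walk picture is sketched below in Python: step lengths and rest periods are drawn from two-parameter gamma distributions and accumulated to give a mean virtual particle velocity. The gamma parameters are hypothetical, and the dependence on deposition and erosion elevations described above is ignored.

    import numpy as np

    rng     = np.random.default_rng(2)
    n_steps = 1000

    # Hypothetical gamma (shape, scale) parameters for step lengths [m] and rest periods [s]
    step_len  = rng.gamma(shape=1.5, scale=0.20, size=n_steps)    # random step lengths
    rest_time = rng.gamma(shape=0.8, scale=600.0, size=n_steps)   # random rest durations

    distance = step_len.cumsum()    # downstream position after each step
    elapsed  = rest_time.cumsum()   # cumulative rest time (transport time neglected)

    # Mean virtual velocity of a bed-material particle under these assumptions
    print("mean virtual velocity [m/s]:", distance[-1] / elapsed[-1])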
Wang, Jihan; Yang, Kai
2014-07-01
An efficient operating room needs both little underutilised and overutilised time to achieve optimal cost efficiency. The probabilities of underrun and overrun of lists of cases can be estimated by a well defined duration distribution of the lists. To propose a method of predicting the probabilities of underrun and overrun of lists of cases using Type IV Pearson distribution to support case scheduling. Six years of data were collected. The first 5 years of data were used to fit distributions and estimate parameters. The data from the last year were used as testing data to validate the proposed methods. The percentiles of the duration distribution of lists of cases were calculated by Type IV Pearson distribution and t-distribution. Monte Carlo simulation was conducted to verify the accuracy of percentiles defined by the proposed methods. Operating rooms in John D. Dingell VA Medical Center, United States, from January 2005 to December 2011. Differences between the proportion of lists of cases that were completed within the percentiles of the proposed duration distribution of the lists and the corresponding percentiles. Compared with the t-distribution, the proposed new distribution is 8.31% (0.38) more accurate on average and 14.16% (0.19) more accurate in calculating the probabilities at the 10th and 90th percentiles of the distribution, which is a major concern of operating room schedulers. The absolute deviations between the percentiles defined by Type IV Pearson distribution and those from Monte Carlo simulation varied from 0.20 min (0.01) to 0.43 min (0.03). Operating room schedulers can rely on the most recent 10 cases with the same combination of surgeon and procedure(s) for distribution parameter estimation to plan lists of cases. Values are mean (SEM). The proposed Type IV Pearson distribution is more accurate than t-distribution to estimate the probabilities of underrun and overrun of lists of cases. However, as not all the individual case durations followed log-normal distributions, there was some deviation from the true duration distribution of the lists.
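As a point of reference for the t-distribution baseline used in the comparison, the Python sketch below estimates the 10th and 90th percentiles of a future case duration from the most recent 10 cases, working on the log scale. The durations are hypothetical, and the Type IV Pearson calculation for whole lists of cases is not reproduced here.

    import numpy as np
    from scipy import stats

    # Hypothetical durations (minutes) of the last 10 cases with the same surgeon/procedure
    d = np.array([95, 110, 102, 88, 120, 105, 99, 115, 92, 108], dtype=float)

    # Treat individual case durations as roughly log-normal and work on the log scale
    x    = np.log(d)
    n    = len(x)
    mean = x.mean()
    sd   = x.std(ddof=1)

    # 10th and 90th percentiles of a single future case from the t predictive distribution
    for q in (0.10, 0.90):
        t_q = stats.t.ppf(q, df=n - 1)
        print(f"{int(q*100)}th percentile: {np.exp(mean + t_q * sd * np.sqrt(1 + 1/n)):.1f} min")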
NASA Astrophysics Data System (ADS)
Freire, Sérgio; Aubrecht, Christoph
2010-05-01
The recent 7.0 M earthquake that caused severe damage and destruction in parts of Haiti struck close to 5 PM (local time), at a moment when many people were not in their residences, instead being in their workplaces, schools, or churches. Community vulnerability assessment to seismic hazard relying solely on the location and density of resident-based census population, as is commonly the case, would grossly misrepresent the real situation. In particular in the context of global (climate) change, risk analysis is a research field increasingly gaining in importance whereas risk is usually defined as a function of hazard probability and vulnerability. Assessment and mapping of human vulnerability has however generally been lagging behind hazard analysis efforts. Central to the concept of vulnerability is the issue of human exposure. Analysis of exposure is often spatially tied to administrative units or reference objects such as buildings, spanning scales from the regional level to local studies for small areas. Due to human activities and mobility, the spatial distribution of population is time-dependent, especially in metropolitan areas. Accurately estimating population exposure is a key component of catastrophe loss modeling, one element of effective risk analysis and emergency management. Therefore, accounting for the spatio-temporal dynamics of human vulnerability correlates with recent recommendations to improve vulnerability analyses. Earthquakes are the prototype for a major disaster, being low-probability, rapid-onset, high-consequence events. Lisbon, Portugal, is subject to a high risk of earthquake, which can strike at any day and time, as confirmed by modern history (e.g. December 2009). The recently-approved Special Emergency and Civil Protection Plan (PEERS) is based on a Seismic Intensity map, and only contemplates resident population from the census as proxy for human exposure. In the present work we map and analyze the spatio-temporal distribution of population in the daily cycle to re-assess exposure to earthquake hazard in the Lisbon Metropolitan Area, home to almost three million people. New high-resolution (50 m grids) daytime and nighttime population distribution maps are developed using dasymetric mapping. The modeling approach uses areal interpolation to combine best-available census data and statistics with land use and land cover data. Mobility statistics are considered for mapping daytime distribution, and empirical parameters used for interpolation are obtained from a previous effort in high resolution population mapping of part of the study area. Finally, the population distribution maps are combined with the Seismic Hazard Intensity map to: (1) quantify and compare human exposure to seismic intensity levels in the daytime and nighttime periods, and (2) derive nighttime and daytime overall Earthquake Risk maps. This novel approach yields previously unavailable spatio-temporal population distribution information for the study area, enabling refined and more accurate earthquake risk mapping and assessment. Additionally, such population exposure datasets can be combined with different hazard maps to improve spatio-temporal assessment and risk mapping for any type of hazard, natural or man-made. We believe this improved characterization of vulnerability and risk can benefit all phases of the disaster management process where human exposure has to be considered, namely in emergency planning, risk mitigation, preparedness, and response to an event.
A tool for simulating collision probabilities of animals with marine renewable energy devices.
Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise
2017-01-01
The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, and numerical details and potential issues and limitations in the application of the resulting probability distributions are highlighted.
Quantum key distribution without the wavefunction
NASA Astrophysics Data System (ADS)
Niestegge, Gerd
A well-known feature of quantum mechanics is the secure exchange of secret bit strings which can then be used as keys to encrypt messages transmitted over any classical communication channel. It is demonstrated that this quantum key distribution allows a much more general and abstract access than commonly thought. The results include some generalizations of the Hilbert space version of quantum key distribution, but are based upon a general nonclassical extension of conditional probability. A special state-independent conditional probability is identified as origin of the superior security of quantum key distribution; this is a purely algebraic property of the quantum logic and represents the transition probability between the outcomes of two consecutive quantum measurements.
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
Belcher, Wayne R.; Sweetkind, Donald S.; Elliott, Peggy E.
2002-01-01
The use of geologic information such as lithology and rock properties is important to constrain conceptual and numerical hydrogeologic models. This geologic information is difficult to apply explicitly to numerical modeling and analyses because it tends to be qualitative rather than quantitative. This study uses a compilation of hydraulic-conductivity measurements to derive estimates of the probability distributions for several hydrogeologic units within the Death Valley regional ground-water flow system, a geologically and hydrologically complex region underlain by basin-fill sediments, volcanic, intrusive, sedimentary, and metamorphic rocks. Probability distributions of hydraulic conductivity for general rock types have been studied previously; however, this study provides more detailed definition of hydrogeologic units based on lithostratigraphy, lithology, alteration, and fracturing and compares the probability distributions to the aquifer test data. Results suggest that these probability distributions can be used for studies involving, for example, numerical flow modeling, recharge, evapotranspiration, and rainfall runoff. These probability distributions can be used for such studies involving the hydrogeologic units in the region, as well as for similar rock types elsewhere. Within the study area, fracturing appears to have the greatest influence on the hydraulic conductivity of carbonate bedrock hydrogeologic units. Similar to earlier studies, we find that alteration and welding in the Tertiary volcanic rocks greatly influence hydraulic conductivity. As alteration increases, hydraulic conductivity tends to decrease. Increasing degrees of welding appears to increase hydraulic conductivity because welding increases the brittleness of the volcanic rocks, thus increasing the amount of fracturing.
Bayesian alternative to the ISO-GUM's use of the Welch Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
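For reference, the Welch-Satterthwaite formula discussed above gives the effective degrees of freedom from the combined standard uncertainty u_c(y), the sensitivity coefficients c_i, the input standard uncertainties u(x_i) and their degrees of freedom ν_i:

    \nu_\mathrm{eff} = \frac{u_c^{4}(y)}{\sum_{i=1}^{N} \dfrac{c_i^{4}\, u^{4}(x_i)}{\nu_i}},
    \qquad
    u_c^{2}(y) = \sum_{i=1}^{N} c_i^{2}\, u^{2}(x_i).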
Does Litter Size Variation Affect Models of Terrestrial Carnivore Extinction Risk and Management?
Devenish-Nelson, Eleanor S.; Stephens, Philip A.; Harris, Stephen; Soulsbury, Carl; Richards, Shane A.
2013-01-01
Background Individual variation in both survival and reproduction has the potential to influence extinction risk. Especially for rare or threatened species, reliable population models should adequately incorporate demographic uncertainty. Here, we focus on an important form of demographic stochasticity: variation in litter sizes. We use terrestrial carnivores as an example taxon, as they are frequently threatened or of economic importance. Since data on intraspecific litter size variation are often sparse, it is unclear what probability distribution should be used to describe the pattern of litter size variation for multiparous carnivores. Methodology/Principal Findings We used litter size data on 32 terrestrial carnivore species to test the fit of 12 probability distributions. The influence of these distributions on quasi-extinction probabilities and the probability of successful disease control was then examined for three canid species – the island fox Urocyon littoralis, the red fox Vulpes vulpes, and the African wild dog Lycaon pictus. Best fitting probability distributions differed among the carnivores examined. However, the discretised normal distribution provided the best fit for the majority of species, because variation among litter-sizes was often small. Importantly, however, the outcomes of demographic models were generally robust to the distribution used. Conclusion/Significance These results provide reassurance for those using demographic modelling for the management of less studied carnivores in which litter size variation is estimated using data from species with similar reproductive attributes. PMID:23469140
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
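A greatly simplified stand-in for the probabilistic part of such an analysis can be written with the elementary torque-preload (nut factor) relation T = K F d in place of the finite element model. In the Python sketch below, every distribution, the nut factor, the screw diameter and the target preload window are assumptions chosen only to illustrate how a preload distribution and a target-range probability are obtained.

    import numpy as np

    rng = np.random.default_rng(3)
    n   = 100_000

    # Assumed input distributions (illustrative values, not the paper's finite element inputs)
    T = rng.normal(0.26, 0.02, n)    # applied tightening torque [N.m] (~26 N.cm nominal)
    K = rng.normal(0.20, 0.03, n)    # nut (torque) factor, lumping thread friction effects
    K = np.clip(K, 0.10, None)
    d = 2.0e-3                       # assumed abutment screw diameter [m]

    # Elementary torque-preload relation T = K * F * d  =>  F = T / (K * d)
    F = T / (K * d)

    # Empirical preload distribution and probability of hitting an assumed target window
    in_target = (F >= 500.0) & (F <= 750.0)
    print("mean preload [N]             :", round(F.mean(), 1))
    print("P(500 N <= preload <= 750 N) :", round(in_target.mean(), 3))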
Comparative analysis through probability distributions of a data set
NASA Astrophysics Data System (ADS)
Cristea, Gabriel; Constantinescu, Dan Mihai
2018-02-01
In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea used is to measure the "distance" between the data and the tested distribution, and compare that distance to some threshold values. Calculating the goodness of fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness of fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
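The model-selection workflow described here can be sketched in a few lines of Python: several candidate distributions are fitted by maximum likelihood and ranked by the Kolmogorov-Smirnov distance between the data and each fitted model. The data set is simulated and purely illustrative.

    import numpy as np
    from scipy import stats

    # Hypothetical data set; in practice this would be the measured sample under study
    rng  = np.random.default_rng(4)
    data = rng.gamma(shape=2.5, scale=1.8, size=500)

    candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
                  "gamma": stats.gamma, "weibull": stats.weibull_min}

    for name, dist in candidates.items():
        params = dist.fit(data)                        # maximum-likelihood parameter estimates
        ks     = stats.kstest(data, dist.cdf, args=params)
        print(f"{name:10s} KS statistic = {ks.statistic:.4f}  p-value = {ks.pvalue:.3f}")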
On the theory of intensity distributions of tornadoes and other low pressure systems
NASA Astrophysics Data System (ADS)
Schielicke, Lisa; Névir, Peter
Approaching from a theoretical point of view, this work presents a theory which unifies intensity distributions of different low pressure systems, based on an energy of displacement. Resulting from a generalized Boltzmann distribution, the expression of this energy of displacement is obtained by radial integration over the forces which are in balance with the pressure gradient force in the horizontal equation of motion. A scale analysis helps to find out which balance of forces prevail. According to the prevailing balances, the expression of the energy of displacement differs for various depressions. Investigating the system at the moment of maximum intensity, the energy of displacement can be interpreted as the work that has to be done to generate and finally eliminate the pressure anomaly, respectively. By choosing the appropriate balance of forces, number-intensity (energy of displacement) distributions show exponential behavior with the same decay rate β for tornadoes and cyclones, if tropical and extra-tropical cyclones are investigated together. The decay rate is related to a characteristic (universal) scale of the energy of displacement which has approximately the value E_u = β^(-1) ≈ 1000 m^2 s^(-2). In consequence, while the different balances of forces cause the scales of velocity, the energy of displacement scale seems to be universal for all low pressure systems. Additionally, if intensity is expressed as lifetime minimum pressure, the number-intensity (pressure) distributions should be power law distributed. Moreover, this work points out that the choice of the physical quantity which represents the intensity is important concerning the behavior of intensity distributions. Various expressions of the intensity like velocity, kinetic energy, energy of displacement and pressure are possible, but lead to different behavior of the distributions.
Dignam, Jade; Copland, David; McKinnon, Eril; Burfein, Penni; O'Brien, Kate; Farrell, Anna; Rodriguez, Amy D
2015-08-01
Most studies comparing different levels of aphasia treatment intensity have not controlled the dosage of therapy provided. Consequently, the true effect of treatment intensity in aphasia rehabilitation remains unknown. Aphasia Language Impairment and Functioning Therapy is an intensive, comprehensive aphasia program. We investigated the efficacy of a dosage-controlled trial of Aphasia Language Impairment and Functioning Therapy, when delivered in an intensive versus distributed therapy schedule, on communication outcomes in participants with chronic aphasia. Thirty-four adults with chronic, poststroke aphasia were recruited to participate in an intensive (n=16; 16 hours per week; 3 weeks) versus distributed (n=18; 6 hours per week; 8 weeks) therapy program. Treatment included 48 hours of impairment, functional, computer, and group-based aphasia therapy. Distributed therapy resulted in significantly greater improvements on the Boston Naming Test when compared with intensive therapy immediately post therapy (P=0.04) and at 1-month follow-up (P=0.002). We found comparable gains on measures of participants' communicative effectiveness, communication confidence, and communication-related quality of life for the intensive and distributed treatment conditions at post-therapy and 1-month follow-up. Aphasia Language Impairment and Functioning Therapy resulted in superior clinical outcomes on measures of language impairment when delivered in a distributed versus intensive schedule. The therapy program had a positive effect on participants' functional communication and communication-related quality of life, regardless of treatment intensity. These findings contribute to our understanding of the effect of treatment intensity in aphasia rehabilitation and have important clinical implications for service delivery models. © 2015 American Heart Association, Inc.
Impact of temporal probability in 4D dose calculation for lung tumors.
Rouabhi, Ouided; Ma, Mingyu; Bayouth, John; Xia, Junyi
2015-11-08
The purpose of this study was to evaluate the dosimetric uncertainty in 4D dose calculation using three temporal probability distributions: uniform distribution, sinusoidal distribution, and patient-specific distribution derived from the patient respiratory trace. Temporal probability, defined as the fraction of time a patient spends in each respiratory amplitude, was evaluated in nine lung cancer patients. Four-dimensional computed tomography (4D CT), along with deformable image registration, was used to compute 4D dose incorporating the patient's respiratory motion. First, the dose of each of 10 phase CTs was computed using the same planning parameters as those used in 3D treatment planning based on the breath-hold CT. Next, deformable image registration was used to deform the dose of each phase CT to the breath-hold CT using the deformation map between the phase CT and the breath-hold CT. Finally, the 4D dose was computed by summing the deformed phase doses using their corresponding temporal probabilities. In this study, 4D dose calculated from the patient-specific temporal probability distribution was used as the ground truth. The dosimetric evaluation matrix included: 1) 3D gamma analysis, 2) mean tumor dose (MTD), 3) mean lung dose (MLD), and 4) lung V20. For seven out of nine patients, both uniform and sinusoidal temporal probability dose distributions were found to have an average gamma passing rate > 95% for both the lung and PTV regions. Compared with 4D dose calculated using the patient respiratory trace, doses using uniform and sinusoidal distribution showed a percentage difference on average of -0.1% ± 0.6% and -0.2% ± 0.4% in MTD, -0.2% ± 1.9% and -0.2% ± 1.3% in MLD, 0.09% ± 2.8% and -0.07% ± 1.8% in lung V20, -0.1% ± 2.0% and 0.08% ± 1.34% in lung V10, 0.47% ± 1.8% and 0.19% ± 1.3% in lung V5, respectively. We concluded that four-dimensional dose computed using either a uniform or sinusoidal temporal probability distribution can approximate four-dimensional dose computed using the patient-specific respiratory trace.
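The weighting step itself reduces to a probability-weighted sum of the deformed phase doses, as in the Python sketch below. The dose grids and the three temporal probability models are stand-ins: in practice each phase dose comes from the phase-CT calculation after deformable registration, and the patient weights come from the recorded respiratory trace.

    import numpy as np

    # Ten respiratory-phase dose grids, already deformed to the reference (breath-hold) CT;
    # random stand-ins here, each from a phase-CT dose calculation in practice
    rng         = np.random.default_rng(5)
    phase_doses = rng.uniform(1.8, 2.2, size=(10, 64, 64, 32))   # [Gy], hypothetical grids

    # Three temporal probability models for the fraction of time spent in each phase
    uniform = np.full(10, 0.1)
    sinus   = np.abs(np.sin(np.linspace(0.0, np.pi, 10)))
    sinus  /= sinus.sum()
    patient = np.array([0.05, 0.07, 0.09, 0.12, 0.17, 0.16, 0.13, 0.10, 0.06, 0.05])

    def dose_4d(weights):
        """4D dose as the temporal-probability-weighted sum of deformed phase doses."""
        return np.tensordot(weights, phase_doses, axes=1)

    for name, w in [("uniform", uniform), ("sinusoidal", sinus), ("patient trace", patient)]:
        print(f"{name:13s} mean dose = {dose_4d(w).mean():.4f} Gy")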
Intensity Based Seismic Hazard Map of Republic of Macedonia
NASA Astrophysics Data System (ADS)
Dojcinovski, Dragi; Dimiskovska, Biserka; Stojmanovska, Marta
2016-04-01
The territory of the Republic of Macedonia and the bordering terrains are among the most seismically active parts of the Balkan Peninsula, belonging to the Mediterranean-Trans-Asian seismic belt. The seismological data on the R. Macedonia from the past 16 centuries point to the occurrence of very strong, catastrophic earthquakes. The hypocenters of the earthquakes are located above the Mohorovicic discontinuity, most frequently at a depth of 10-20 km. Accurate short-term prognosis of earthquake occurrence, i.e., simultaneous prognosis of the time, place and intensity of their occurrence, is still not possible. The present methods of seismic zoning have advanced to such an extent that, with great probability, they enable efficient protection against earthquake effects. The seismic hazard maps of the Republic of Macedonia are the result of analysis and synthesis of data from seismological, seismotectonic and other corresponding investigations necessary for definition of the expected level of seismic hazard for certain time periods. These should be amended, from time to time, with new data and scientific knowledge. The elaboration of this map does not completely solve all issues related to earthquakes, but it provides the basic empirical data necessary for updating the existing regulations for construction of engineering structures in seismically active areas, regulated by legal regulations and technical norms whose constituent part is the seismic hazard map. The map has been elaborated based on complex seismological and geophysical investigations of the considered area and synthesis of the results from these investigations. There were two phases of elaboration of the map. In the first phase, the map of focal zones characterized by maximum magnitudes of possible earthquakes was elaborated. In the second phase, the intensities of expected earthquakes were computed according to the MCS scale. The map is prognostic, i.e., it provides an assessment of the probability of occurrence of future earthquakes with a defined areal distribution of their seismic intensity, depending on the natural characteristics of the terrain. The period of 10.000 years represents the greatest expected seismic threat for the considered area. From the aspect of low-cost construction, it is also necessary to know the seismicity for shorter time periods. Therefore, maps for return periods of 50, 100, 200, 500 and 1000 years have also been elaborated. The maps show a probability of 63% for the occurrence of the expected earthquakes, with maximum intensities expressed on the MCS scale. The map has been elaborated at the scale of 1:1.000.000, and the obtained isolines of seismic intensity are drawn with an error of 5 km. The seismic hazard map of R. Macedonia is used for:
• The needs of the Rulebook on Technical Norms on Construction of Structures in Seismic Areas and for the needs of physical and urban planning and design.
• While defining the seismic design parameters for construction of structures in zones with intensity of I ≥ VII degrees MSK, investigations should be done for detailed seismic zoning and microzoning of the terrain of these zones in compliance with the technical regulations for construction in seismically prone areas.
• The areas on the map indicated by intensity X MCS are not regulated by the valid regulations. Therefore, in practice, these should be treated as areas in which it is not possible to construct any structures without previous surveys.
• Revision of this map is done every five years, as well as after each earthquake whose parameters require modifications and amendments of the map.
NASA Astrophysics Data System (ADS)
Xu, Jinghai; An, Jiwen; Nie, Gaozong
2016-04-01
Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
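The two-phase idea described above (pre-computing grid-cell losses per intensity level before an event, then aggregating them under the co-seismic intensity field) can be illustrated with a minimal sketch. The grid size, damage ratios, and exposure values below are placeholder assumptions for illustration, not the 30'' × 30'' data or damage probability matrices used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-earthquake phase (illustrative): losses per grid cell for each discrete
# seismic intensity level, stored as a lookup table.
n_cells = 1000                                        # hypothetical grid cells
intensities = np.arange(6, 11)                        # intensity VI..X
exposure = rng.lognormal(3.0, 1.0, n_cells)           # placeholder exposure per cell
damage_ratio = {6: 0.01, 7: 0.05, 8: 0.15, 9: 0.35, 10: 0.60}  # assumed mean damage ratios
precomputed = {i: exposure * damage_ratio[int(i)] for i in intensities}

# Co-earthquake phase: a theoretical isoseismal map assigns one intensity to
# each cell; total loss is then a simple lookup-and-sum over the grid.
def total_loss(cell_intensity):
    loss = 0.0
    for i in intensities:
        mask = cell_intensity == i
        loss += precomputed[i][mask].sum()
    return loss

cell_intensity = rng.choice(intensities, n_cells, p=[0.4, 0.3, 0.2, 0.08, 0.02])
print(f"estimated total loss: {total_loss(cell_intensity):.1f}")
```

The design point is that all expensive modeling sits in the pre-earthquake lookup table, so the co-earthquake step reduces to masking and summation.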
NASA Astrophysics Data System (ADS)
Nieradzik, L. P.; Haverd, V. E.; Briggs, P.; Meyer, C. P.; Canadell, J.
2015-12-01
Fires play a major role in the carbon cycle and the development of global vegetation, especially on the continent of Australia, where vegetation is prone to frequent fire occurrences and where regional composition and stand-age distribution are regulated by fire. Furthermore, the probable changes of fire behaviour under a changing climate are still poorly understood and require further investigation. In this presentation we introduce the fire model BLAZE (BLAZe induced land-atmosphere flux Estimator), designed for a novel approach to simulate fire frequencies, fire intensities, fire-related fluxes and the responses in vegetation. Fire frequencies are prescribed using SIMFIRE (Knorr et al., 2014) or GFED3 (e.g. Giglio et al., 2013). Fire-Line-Intensity (FLI) is computed from meteorological information and fuel loads, which are state variables within the C-cycle component of CABLE (Community Atmosphere-Biosphere-Land Exchange model). This FLI is used as an input to the tree-demography model POP (Population-Order-Physiology; Haverd et al., 2014). Within POP the fire mortality depends on FLI and tree height distribution. Intensity-dependent combustion factors (CF) are then generated for and applied to live and litter carbon pools as well as the transfers from live pools to litter caused by fire. Thus, both fire and stand characteristics are taken into account, which has a legacy effect on future events. Gross C-CO2 emissions from Australian wild fires are larger than Australian territorial fossil fuel emissions. However, the net effect of fire on the Australian terrestrial carbon budget is unknown. We address this by applying the newly-developed fire module, integrated within the CABLE land surface model, and optimised for the Australian region, to a reassessment of the Australian Terrestrial Carbon Budget.
Hao, Qing; Sun, Yu-Xin; Xu, Xiang-Rong; Yao, Zi-Wei; Wang, You-Shao; Zhang, Zai-Wang; Luo, Xiao-Jun; Mai, Bi-Xian
2015-10-01
In recent years, fish have often been used as bioindicators to monitor the occurrence of persistent organic pollutants (POPs) on different scales. Forty-five golden threads (Nemipterus virgatus) were collected from six sampling sites in the northern South China Sea (SCS) to investigate the geographical distribution of polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), and dichlorodiphenyltrichloroethane and its metabolites (DDTs). Concentrations of PBDEs, PCBs, and DDTs ranged from 1.3 to 36.0, 2.3 to 76.5, and 8.3 to 228 ng/g lipid weight, respectively. The highest PBDE and DDT concentrations were found in golden threads from Shantou, owing to intensive electronic waste recycling activities and the rapid development of agriculture. Samples from Haikou had the highest levels of PCBs, probably due to the presence of many shipbuilding yards in past years. The concentrations of PBDEs and PCBs showed a decreasing trend from east to west and from north to south, while DDT concentrations showed no obvious spatial trend. PCBs were the most prevalent contaminants in Xiamen and Yangjiang, while DDTs were the dominant compounds at the other four sampling sites. The different POP profiles at each sampling site may be attributed to different pollution sources in the northern SCS. Ratios of (DDD + DDE)/DDTs in golden threads suggested the probability of fresh input of DDT in the northern SCS. The estimated daily intakes of PBDEs, PCBs and DDTs were 0.030-0.069, 0.167-0.258 and 0.105-1.88 ng/kg/day, respectively, which were significantly lower than the acceptable daily intake, suggesting that consumption of golden threads from the northern SCS would not subject residents in the coastal areas of the SCS to significant health risk.
NASA Astrophysics Data System (ADS)
Guimarães Nobre, Gabriela; Arnbjerg-Nielsen, Karsten; Rosbjerg, Dan; Madsen, Henrik
2016-04-01
Traditionally, flood risk assessment studies have been carried out from a univariate frequency analysis perspective. However, statistical dependence between hydrological variables, such as extreme rainfall and extreme sea surge, is plausible, since both variables are to some extent driven by common meteorological conditions. Aiming to overcome this limitation, multivariate statistical techniques have the potential to combine different sources of flooding in the investigation. The aim of this study was to apply a range of statistical methodologies for analyzing combined extreme hydrological variables that can lead to coastal and urban flooding. The study area is the Elwood Catchment, a highly urbanized catchment located in the city of Port Phillip, Melbourne, Australia. The first part of the investigation dealt with the marginal extreme value distributions. Two approaches to extract extreme value series were applied (Annual Maximum and Partial Duration Series), and different probability distribution functions were fitted to the observed samples. Results obtained by using the Generalized Pareto distribution demonstrate the ability of the Pareto family to model the extreme events. Advancing into multivariate extreme value analysis, an investigation regarding the asymptotic properties of extremal dependence was first carried out. As a weak positive asymptotic dependence between the bivariate extreme pairs was found, the Conditional method proposed by Heffernan and Tawn (2004) was chosen. This approach is suitable for modeling bivariate extreme values which are relatively unlikely to occur together. The results show that the probability of an extreme sea surge occurring during a one-hour intensity extreme precipitation event (or vice versa) can be twice as great as would be estimated under the assumption of independent events. Therefore, presuming independence between these two variables would result in severe underestimation of the flooding risk in the study area.
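As a minimal illustration of the marginal analysis step, the sketch below fits a Generalized Pareto distribution to threshold excesses extracted from a Partial Duration Series and derives a return level. The synthetic series, threshold choice, and return-period formula are generic assumptions, not the Elwood Catchment records or the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic hourly rainfall-like series, a stand-in for the observed record.
series = rng.gamma(shape=0.4, scale=2.0, size=50_000)

# Partial Duration Series: keep exceedances above a high threshold.
threshold = np.quantile(series, 0.995)
excess = series[series > threshold] - threshold

# Fit the Generalized Pareto distribution to the excesses (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)

# Return level for a given return period, assuming a known average number of
# exceedances per year (here computed from the synthetic record length).
exceed_per_year = len(excess) / (len(series) / (365.25 * 24))
def return_level(T_years):
    p = 1.0 / (T_years * exceed_per_year)   # per-exceedance tail probability
    return threshold + stats.genpareto.ppf(1 - p, shape, loc=0.0, scale=scale)

print(f"GPD shape={shape:.3f}, scale={scale:.3f}")
print(f"100-year return level (synthetic units): {return_level(100):.2f}")
```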
Climate change and malaria risk in Russia in 21st century
NASA Astrophysics Data System (ADS)
Malkhazova, S.; Shartova, N.
2010-09-01
The purpose of this research is the development of a prognostic model of malaria risk for Russia in the 21st century under the IPCC "A2" climate scenario. The following tasks were formulated to reach this goal: to define the basic epidemiological parameters describing the malaria situation and the methods of data processing; to create maps of malaria risk; and to analyze changes in malaria distribution under projected future climate conditions in comparison with the modern climate. Many factors (biological, social and economic) influence malaria distribution. Nevertheless, the incubation period of the parasite depends first of all on temperature. This is the primary factor that defines the potential area of infection and the ability and specificity to transmit malaria. Accordingly, the model is based on the relationship between climate (average daily temperature) and the intensity of malaria transmission. The object of research is the malaria parasite Plasmodium vivax, which is of the greatest importance for Russia because it has the lowest minimum temperature threshold for development. The climate data consist of daily average air temperatures for three analyzed periods. The period 1961-1989 describes the modern climate and corresponds to the minimum 30-year period necessary for an assessment of climate and of changes connected with biotic components. The prognostic malaria model is based on predicted daily average temperatures for 2046-2065 (the middle of the century) and 2089-2100 (the end of the century). All data sets are presented on a 2×20 grid. The conclusions on possible changes in malaria distribution and transmission in the middle and at the end of the 21st century are as follows. The duration of the period of effective temperatures (the period when parasite development is possible) and of the period of effective susceptibility of mosquitoes to infection (the period when the malaria transmission cycle is possible) will increase; the shift of the beginning of the malaria transmission period to an earlier time, and of its end to a later time, is connected with an increase of the annual sum of effective temperatures. The northern bounds of the territory where temperature conditions allow parasite development and disease transmission will move significantly to the north. Accordingly, there will be an expansion of the potential disease distribution area. Annual development of the parasite and malaria transmission will probably be possible over nearly the whole European part of Russia. The probability of malaria transmission and its intensity will increase. The results of the research indicate a growth of malaria risk in Russia in the 21st century.
Statistics of the geomagnetic secular variation for the past 5Ma
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1986-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
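A minimal sketch of the "giant Gaussian process" idea: sample scaled spherical harmonic (Gauss) coefficients so that the non-dipole spatial power spectrum is white at the core-mantle boundary, then check the Lowes-Mauersberger spectrum. The overall scale constant, truncation degree, and radii values are illustrative assumptions, not the parameters of the fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)
a, c = 6371.2, 3485.0          # Earth and core radii in km
K = 1.0                        # arbitrary overall scale (assumption)
L = 10                         # truncation degree (assumption)

def sample_coeffs():
    """Sample non-dipole Gauss coefficients so the spectrum is white at the CMB."""
    coeffs = {}
    for l in range(2, L + 1):                  # non-dipole part only
        var = K * (c / a) ** (2 * l + 4) / ((l + 1) * (2 * l + 1))
        coeffs[l] = rng.normal(0.0, np.sqrt(var), size=2 * l + 1)
    return coeffs

def lowes_spectrum(coeffs, r):
    """Lowes-Mauersberger spectrum R_l(r) = (l+1) (a/r)^(2l+4) * sum of squared coefficients."""
    return {l: (l + 1) * (a / r) ** (2 * l + 4) * np.sum(g ** 2)
            for l, g in coeffs.items()}

# Averaged over many realizations, R_l evaluated at the CMB is flat in l (white source).
n = 2000
mean_Rl = {l: 0.0 for l in range(2, L + 1)}
for _ in range(n):
    for l, Rl in lowes_spectrum(sample_coeffs(), c).items():
        mean_Rl[l] += Rl / n
print({l: round(v, 3) for l, v in mean_Rl.items()})
```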
NASA Astrophysics Data System (ADS)
Krivoruchko, D. D.; Skrylev, A. V.
2018-01-01
The article investigates the excited-state population distribution of a low-temperature xenon plasma in a closed-electron-drift thruster operating at 300 W, measured by laser-induced fluorescence (LIF) over the 350-1100 nm range. Seven xenon ion (Xe II) transitions were analyzed, while only three transitions were explored for neutral atoms (Xe I), since the majority of Xe I emission falls into the ultraviolet or infrared part of the spectrum and is difficult to measure. The necessary spontaneous emission probabilities (Einstein coefficients) were calculated. Measurements of the excited-state distribution were made for points (probed volume of about 12 mm3) across planes perpendicular to the thruster axis at four positions along it (5, 10, 50 and 100 mm). The measured LIF signal intensity differs from point to point (due to the anisotropy of the thruster plume); however, the structure of the state population distribution persists in the plume and breaks down at the thruster exit plane and in the cathode area. The measured distributions show that describing the plasma of a Hall thruster requires a multilevel kinetic model; the classical model can be used only for the far-plume region or for specific electronic transitions.
Statistics of the geomagnetic secular variation for the past 5 m.y
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1988-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2010-12-01
This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case a q-Gaussian distribution can be theoretically derived as a stationary probability distribution of the multiplicative stochastic differential equation with both mutually independent multiplicative and additive noises. By using the proposed stochastic differential equation a method to evaluate a default probability under a given risk buffer is proposed.
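A minimal numerical sketch of the one-dimensional case: an Euler-Maruyama simulation of a linear SDE with independent multiplicative and additive noises, whose stationary density has power-law (q-Gaussian-like) tails. The drift and noise parameters below are illustrative, not values from the paper, and the excess kurtosis is used only as a quick signature of the heavy tails.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
gamma_, D_m, D_a = 1.0, 0.2, 0.5          # illustrative drift and noise intensities
dt, n_steps, n_paths = 1e-3, 20_000, 2_000

# dx = -gamma x dt + sqrt(2 D_m) x dW1 + sqrt(2 D_a) dW2, with independent dW1, dW2.
x = np.zeros(n_paths)
for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)   # multiplicative noise increment
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)   # additive noise increment
    x += -gamma_ * x * dt + np.sqrt(2 * D_m) * x * dW1 + np.sqrt(2 * D_a) * dW2

# Gaussian data would give excess kurtosis near 0; the stationary state here is heavy-tailed.
print("excess kurtosis:", kurtosis(x))
```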
Net present value probability distributions from decline curve reserves estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, D.E.; Huffman, C.H.; Thompson, R.S.
1995-12-31
This paper demonstrates how reserves probability distributions can be used to develop net present value (NPV) distributions. NPV probability distributions were developed from the rate and reserves distributions presented in SPE 28333. This real-data study used practicing engineers' evaluations of production histories. Two approaches were examined to quantify portfolio risk. The first approach, the NPV Relative Risk Plot, compares the mean NPV with the NPV relative risk ratio for the portfolio. The relative risk ratio is the NPV standard deviation (σ) divided by the mean NPV (μ). The second approach, a Risk-Return Plot, is a plot of the mean (μ) discounted cash flow rate of return (DCFROR) versus the standard deviation (σ) of the DCFROR distribution. This plot provides a risk-return relationship for comparing various portfolios. These methods may help evaluate property acquisition and divestiture alternatives and assess the relative risk of a suite of wells or fields for bank loans.
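A minimal sketch of the two portfolio metrics described here, computed from Monte Carlo samples; the sampled NPV and DCFROR distributions are placeholders, not the SPE 28333 data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder NPV and DCFROR samples for one portfolio (e.g. reserves
# distributions propagated through a cash-flow model).
npv = rng.lognormal(mean=2.0, sigma=0.6, size=10_000)     # $MM, illustrative
dcfror = rng.normal(loc=0.18, scale=0.07, size=10_000)    # fraction/yr, illustrative

# NPV Relative Risk Plot coordinates: mean NPV vs sigma/mu of NPV.
mean_npv = npv.mean()
relative_risk = npv.std() / mean_npv

# Risk-Return Plot coordinates: mean DCFROR vs its standard deviation.
mean_ror, sigma_ror = dcfror.mean(), dcfror.std()

print(f"NPV Relative Risk Plot point: (mu={mean_npv:.2f}, sigma/mu={relative_risk:.2f})")
print(f"Risk-Return Plot point: (mu={mean_ror:.3f}, sigma={sigma_ror:.3f})")
```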
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
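A discrete sketch of the square-root rule stated above: if the target sits at site i with probability t_i and each independent search draw lands on site i with probability s_i, the expected number of draws is sum_i t_i / s_i, which is minimized by s_i proportional to sqrt(t_i). The random target distribution below is only an illustration of that comparison.

```python
import numpy as np

rng = np.random.default_rng(5)
t = rng.dirichlet(np.ones(50))          # target distribution over 50 sites

def expected_trials(s, t):
    """Expected number of independent draws from s until the target (drawn from t) is hit."""
    return np.sum(t / s)                # E[trials | target = i] = 1/s_i (geometric)

s_sqrt = np.sqrt(t) / np.sqrt(t).sum()  # optimal: proportional to sqrt(t)
s_same = t                              # naive: search where the target is most likely

print("sqrt rule :", expected_trials(s_sqrt, t))
print("naive rule:", expected_trials(s_same, t))   # equals the number of sites (50)
```

The naive rule always averages exactly one trial per site regardless of t, whereas the square-root rule does strictly better whenever t is non-uniform (Cauchy-Schwarz).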
Belotti, Elisa; Weder, Nicole; Bufka, Luděk; Kaldhusdal, Arne; Küchenhoff, Helmut; Seibold, Heidi; Woelfing, Benno; Heurich, Marco
2015-01-01
In Central Europe, protected areas are too small to ensure survival of populations of large carnivores. In the surrounding areas, these species are often persecuted due to competition with game hunters. Therefore, understanding how predation intensity varies spatio-temporally across areas with different levels of protection is fundamental. We investigated the predation patterns of Eurasian lynx (Lynx lynx) on roe deer (Capreolus capreolus) and red deer (Cervus elaphus) in both protected areas and multi-use landscapes of the Bohemian Forest Ecosystem. Based on 359 roe and red deer killed by 10 GPS-collared lynx, we calculated the species-specific annual kill rates and tested for effects of season and lynx age, sex and reproductive status. Because roe and red deer in the study area concentrate in unprotected lowlands during winter, we modeled spatial distribution of kills separately for summer and winter and calculated the probability of a deer being killed by lynx and the expected number of kills for areas with different levels of protection. Significantly more roe deer (46.05–74.71/year/individual lynx) were killed than red deer (1.57–9.63/year/individual lynx), more deer were killed in winter than in summer, and lynx family groups had higher annual kill rates than adult male, single adult female and subadult female lynx. In winter the probability of a deer being killed and the expected number of kills were higher outside the most protected part of the study area than inside; in summer, this probability did not differ between areas, and the expected number of kills was slightly larger inside than outside the most protected part of the study area. This indicates that the intensity of lynx predation in the unprotected part of the Bohemian Forest Ecosystem increases in winter, thus mitigation of conflicts in these areas should be included as a priority in the lynx conservation strategy. PMID:26379142
Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas
2002-05-01
In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. Then the dose calculation during the iterative optimization process consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it not practical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereby the cutoff method results in a suboptimal treatment plan.
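A minimal sketch of the sampling idea for a single pencil beam: keep low-dose matrix entries only with probability p and rescale kept entries by 1/p, so the expected dose (and hence total energy) is preserved while most entries are dropped. The radial dose profile, cutoff, and keep probability below are illustrative placeholders, not the clinical kernel or the probability distributions tested in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative pencil-beam dose to voxels at radial distance r (rapid falloff).
r = np.linspace(0.0, 50.0, 5_000)          # mm
dose = np.exp(-r / 4.0)                    # placeholder radial dose kernel

# Keep high-dose entries always; keep low-dose entries with probability p and
# rescale them by 1/p so the expectation is unchanged (unbiased sparsification).
cutoff, p = 0.05, 0.1
low = dose < cutoff
keep = ~low | (rng.random(dose.shape) < p)
sampled = np.where(keep, np.where(low, dose / p, dose), 0.0)

print("fraction of entries kept:", keep.mean())
print("true total dose:         ", dose.sum())
print("sampled total dose:      ", sampled.sum())   # close to the true total on average
```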
Substorm Occurrence and Intensity Associated With Three Types of Solar Wind Structure
NASA Astrophysics Data System (ADS)
Liou, Kan; Sotirelis, Thomas; Richardson, Ian
2018-01-01
This paper presents the results of a study of the characteristics of substorms that occurred during three distinct types of solar wind: coronal mass ejection (CME) associated, high-speed streams (HSS), and slow solar wind (SSW). A total of 53,468 geomagnetic substorm onsets from 1983 to 2009 are used and sorted by the three solar wind types. It is found that the probability density function (PDF) of the intersubstorm time can be fitted by the combination of a dominant power law with an exponential cutoff component and a minor lognormal component, implying that substorms are associated with two distinctly different dynamical processes corresponding, perhaps, to the "externally driven" and "internally driven" processes, respectively. We compare substorm frequency and intensity associated with the three types of solar wind. It is found that the intersubstorm time is the longest during SSW and shortest during CME intervals. The averaged intersubstorm time for the internally driven substorms is 3.13, 3.15, and 7.96 h for CME, HSS, and SSW, respectively. The substorm intensity PDFs, as represented by the peak value of |
NASA Astrophysics Data System (ADS)
Mahanti, P.; Robinson, M. S.; Boyd, A. K.
2013-12-01
Craters ~20 km in diameter and above significantly shaped the lunar landscape. The statistical nature of the slope distribution on their walls and floors dominates the overall slope distribution statistics for the lunar surface. Slope statistics are inherently useful for characterizing the current topography of the surface, determining accurate photometric and surface scattering properties, and in defining lunar surface trafficability [1-4]. Earlier experimental studies on the statistical nature of lunar surface slopes were restricted either by resolution limits (Apollo era photogrammetric studies) or by model error considerations (photoclinometric and radar scattering studies) where the true nature of the slope probability distribution was not discernible at baselines smaller than a kilometer [2,3,5]. Accordingly, historical modeling of lunar surface slope probability distributions for applications such as scattering theory development or rover traversability assessment is more general in nature (use of simple statistical models such as the Gaussian distribution [1,2,5,6]). With the advent of high resolution, high precision topographic models of the Moon [7,8], slopes in lunar craters can now be obtained at baselines as low as 6 meters, allowing unprecedented multi-scale (multiple baselines) modeling possibilities for slope probability distributions. Topographic analysis (Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) 2-m digital elevation models (DEM)) of ~20-km diameter Copernican lunar craters revealed generally steep slopes on interior walls (30° to 36°, locally exceeding 40°) over 15-meter baselines [9]. In this work, we extend the analysis from a probability distribution modeling point of view with NAC DEMs to characterize the slope statistics for the floors and walls of the same ~20-km Copernican lunar craters. The difference in slope standard deviations between the Gaussian approximation and the actual distribution (2-meter sampling) was computed over multiple scales. This slope analysis showed that local slope distributions are non-Gaussian for both crater walls and floors. Over larger baselines (~100 meters), crater wall slope probability distributions do approximate Gaussian distributions better, but have long distribution tails. Crater floor probability distributions, however, were always asymmetric (for the baseline scales analyzed) and less affected by baseline scale variations. Accordingly, our results suggest that use of long-tailed probability distributions (like Cauchy) and a baseline-dependent multi-scale model can be more effective in describing the slope statistics for lunar topography. References: [1] Moore, H. (1971), JGR, 75(11). [2] Marcus, A. H. (1969), JGR, 74(22). [3] R. J. Pike (1970), U.S. Geological Survey Working Paper. [4] N. C. Costes, J. E. Farmer and E. B. George (1972), NASA Technical Report TR R-401. [5] M. N. Parker and G. L. Tyler (1973), Radio Science, 8(3), 177-184. [6] Alekseev, V. A. et al. (1968), Soviet Astronomy, Vol. 11, p. 860. [7] Burns et al. (2012), Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXIX-B4, 483-488. [8] Smith et al. (2010), GRL 37, L18204, DOI: 10.1029/2010GL043751. [9] Wagner, R., Robinson, M., Speyerer, E., Mahanti, P., LPSC 2013, #2924.
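A small sketch of the kind of comparison described: contrast a Gaussian fit with a long-tailed Cauchy fit on a heavy-tailed slope sample and compare tail quantiles. The synthetic sample below stands in for NAC-derived slopes and is only illustrative of why a Gaussian underpredicts extreme slopes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic fine-baseline slopes (degrees): mostly moderate, with a heavy tail.
slopes = np.abs(8.0 * rng.standard_t(df=3, size=20_000))

mu, sigma = stats.norm.fit(slopes)      # Gaussian approximation
x0, gam = stats.cauchy.fit(slopes)      # long-tailed alternative

for q in (0.95, 0.99, 0.999):
    print(q,
          "empirical", round(float(np.quantile(slopes, q)), 1),
          "normal", round(float(stats.norm.ppf(q, mu, sigma)), 1),
          "cauchy", round(float(stats.cauchy.ppf(q, x0, gam)), 1))
```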
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.
Probability and the changing shape of response distributions for orientation.
Anderson, Britt
2014-11-18
Spatial attention and feature-based attention are regarded as two independent mechanisms for biasing the processing of sensory stimuli. Feature attention is held to be a spatially invariant mechanism that advantages a single feature per sensory dimension. In contrast to the prediction of location independence, I found that participants were able to report the orientation of a briefly presented visual grating better for targets defined by high probability conjunctions of features and locations even when orientations and locations were individually uniform. The advantage for high-probability conjunctions was accompanied by changes in the shape of the response distributions. High-probability conjunctions had error distributions that were not normally distributed but demonstrated increased kurtosis. The increase in kurtosis could be explained as a change in the variances of the component tuning functions that comprise a population mixture. By changing the mixture distribution of orientation-tuned neurons, it is possible to change the shape of the discrimination function. This prompts the suggestion that attention may not "increase" the quality of perceptual processing in an absolute sense but rather prioritizes some stimuli over others. This results in an increased number of highly accurate responses to probable targets and, simultaneously, an increase in the number of very inaccurate responses. © 2014 ARVO.
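A minimal numerical check of the mechanism invoked here: a mixture of Gaussians that share a mean but differ in variance is not Gaussian and has positive excess kurtosis, so changing the mixture of tuning-function widths changes the shape of the error distribution without necessarily changing its overall spread. The widths and mixing proportions are illustrative values, not fitted to the reported data.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(8)
n = 200_000

# Single-variance errors (Gaussian) vs a 50/50 mixture of narrow and wide
# components chosen so the overall variances are roughly matched.
single = rng.normal(0.0, 10.0, n)
widths = rng.choice([4.0, 13.56], size=n, p=[0.5, 0.5])   # assumed component SDs
mixture = rng.normal(0.0, widths)

print("overall SD:      ", round(float(single.std()), 2), round(float(mixture.std()), 2))
print("excess kurtosis: ", round(float(kurtosis(single)), 2), round(float(kurtosis(mixture)), 2))
```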
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented to derive several statistical wind models from the properties of the multivariate normal probability function. Assuming that the winds are bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal distribution of the wind. By further assuming that the winds at two altitudes are quadrivariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
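A quick numerical check of property (2): if the two wind components are zero-mean, equal-variance, uncorrelated normals, the resulting wind speed follows a Rayleigh distribution. The component standard deviation below is illustrative; the general bivariate case in the report carries five sample parameters (two means, two standard deviations, and the correlation), for which the speed distribution is not exactly Rayleigh.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
sigma = 5.0                             # m/s, illustrative component standard deviation
u = rng.normal(0.0, sigma, 100_000)     # zonal component
v = rng.normal(0.0, sigma, 100_000)     # meridional component
speed = np.hypot(u, v)

# Compare empirical speed quantiles with Rayleigh(scale=sigma) quantiles.
for q in (0.5, 0.9, 0.99):
    print(q,
          round(float(np.quantile(speed, q)), 2),
          round(float(stats.rayleigh.ppf(q, scale=sigma)), 2))
```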
Noise-sustained synchronization between electrically coupled FitzHugh-Nagumo networks
NASA Astrophysics Data System (ADS)
Cascallares, Guadalupe; Sánchez, Alejandro D.; dell'Erba, Matías G.; Izús, Gonzalo G.
2015-09-01
We investigate the capability of electrical synapses to transmit the noise-sustained network activity from one network to another. The particular setup we consider is two identical rings with excitable FitzHugh-Nagumo cell dynamics and nearest-neighbor antiphase intra-ring coupling, electrically coupled between corresponding nodes. The whole system is subjected to independent local additive Gaussian white noises with common intensity η, but only one ring is externally forced by a global adiabatic subthreshold harmonic signal. We then seek conditions for a particular noise level to promote synchronized stable firing patterns. By running numerical integrations with increasing η, we observe the excitation activity to become spatiotemporally self-organized, until η is so strong that it spoils synchronization between networks for a given value of the electric coupling strength. By means of a four-cell model and calculating the stationary probability distribution, we obtain a (signal-dependent) non-equilibrium potential landscape which explains qualitatively the observed regimes, and whose barrier heights give a good estimate of the optimal noise intensity for synchronization between networks.
Appetitive behavior, compulsivity, and neurochemistry in Prader-Willi syndrome.
Dimitropoulos, A; Feurer, I D; Roof, E; Stone, W; Butler, M G; Sutcliffe, J; Thompson, T
2000-01-01
Advances in genetic research have led to an increased understanding of genotype-phenotype relationships. Excessive eating and weight gain characteristic of Prader-Willi syndrome (PWS) have been the understandable focus of much of the research. The intense preoccupation with food, lack of satiation, and incessant food seeking are among the most striking features of PWS. It has become increasingly clear that the behavioral phenotype of PWS also includes symptoms similar to obsessive compulsive disorder, which in all probability interact with the incessant hunger and lack of satiation to engender the intense preoccupation and food seeking behavior that is characteristic of this disorder. Several lines of evidence suggest that genetic material on chromosome 15 may alter synthesis, release, metabolism, binding, intrinsic activity, or reuptake of specific neurotransmitters, or alter the receptor numbers and/or distribution involved in modulating feeding. Among the likely candidates are GABAnergic, serotonergic, and neuropeptidergic mechanisms. This review summarizes what is known about the appetitive behavior and compulsivity in PWS and discusses the possible mechanisms underlying these behaviors. MRDD Research Reviews 2000;6:125-130. Copyright 2000 Wiley-Liss, Inc.
Joint probabilities and quantum cognition
NASA Astrophysics Data System (ADS)
de Barros, J. Acacio
2012-12-01
In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
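The non-existence of a joint distribution for contextual ±1 variables can be checked directly with a small linear program: enumerate the eight sign assignments of (A, B, C) and ask whether any probability vector reproduces the given pairwise moments. The pairwise correlations used below are hypothetical values chosen to show one feasible case and one infeasible case; they are not taken from the neural-oscillator model.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def joint_exists(eAB, eBC, eAC):
    """Check whether ±1 variables A, B, C with zero means and the given pairwise
    expectations admit a joint probability distribution (LP feasibility)."""
    outcomes = list(itertools.product([-1, 1], repeat=3))      # 8 sign assignments
    A_eq, b_eq = [], []
    # Moment constraints: E[A] = E[B] = E[C] = 0 and the given pairwise products.
    A_eq.append([a for a, b, c in outcomes]); b_eq.append(0.0)
    A_eq.append([b for a, b, c in outcomes]); b_eq.append(0.0)
    A_eq.append([c for a, b, c in outcomes]); b_eq.append(0.0)
    A_eq.append([a * b for a, b, c in outcomes]); b_eq.append(eAB)
    A_eq.append([b * c for a, b, c in outcomes]); b_eq.append(eBC)
    A_eq.append([a * c for a, b, c in outcomes]); b_eq.append(eAC)
    A_eq.append([1.0] * 8); b_eq.append(1.0)                   # normalization
    res = linprog(c=np.zeros(8), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
    return res.success

print(joint_exists(-0.3, -0.3, -0.3))   # True: a joint distribution exists
print(joint_exists(-0.8, -0.8, -0.8))   # False: pairwise-consistent but no joint
```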
Zhuang, Jiancang; Ogata, Yosihiko
2006-04-01
The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants among all events is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases when the magnitude of the background event increases. These results, obtained from a complicated clustering model, where the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
NASA Astrophysics Data System (ADS)
Berra, F.; Felletti, F.
2011-04-01
The Lower Permian succession of the Central Southern Alps (Lombardy, Northern Italy) was deposited in fault-controlled continental basins, probably related to transtensional tectonics. We focussed our study on the stratigraphic record of the Lower Permian Orobic Basin, which consists of a 1000 m thick succession of prevailing continental clastics with intercalations of ignimbritic flows and tuffs (Pizzo del Diavolo Formation, PDV) resting on the underlying prevailing pyroclastic flows of the Cabianca Volcanite. The PDV consists of a lower part (composed of conglomerates passing laterally to sandstones and distally to silt and shales), a middle part (pelitic, with carbonates) and an upper part (alternating sandstone, silt and volcanic flows). Syndepositional tectonics during the deposition of the PDV is recorded by facies distribution, thickness changes and by the presence of deformation and liquefaction structures interpreted as seismites. Deformation is recorded by both ductile structures (ball-and-pillow, plastic intrusion, disturbed lamination, convolute stratification and slumps) and brittle structures (sand dykes and autoclastic breccias). Both the sedimentological features and the geodynamic setting of the depositional basin confidently support the interpretation of the described deformation features as related to seismic shocks. The most significant seismically-induced deformation is represented by a slumped horizon (about 4 m thick on average) which can be followed laterally for more than 5 km. The slumped bed consists of playa-lake deposits (alternating pelites and microbial carbonates, associated with mud cracks and vertebrate tracks). The lateral continuity and the evidence of deposition on a very low-angle surface along with the deformation/liquefaction of the sediments suggest that the slump was triggered by a high-magnitude earthquake. The stratigraphic distribution of the seismites allows us to identify time intervals of intense seismic activity, which correspond to rapid and basin-wide changes in the stratigraphical architecture of the depositional basin and/or to the reprise of the volcanic activity. The nature of the structures and their distribution suggest that the magnitude of the earthquakes responsible for the observed structures was likely higher than 5 (in order to produce sediment liquefaction) and probably reached intensity as high as 7 or more. The basin architecture suggests that the foci of these earthquakes were located close to the fault-controlled borders of the basin or within the basin itself.
NASA Astrophysics Data System (ADS)
Lewkowicz, A. G.; Smith, K. M.
2004-12-01
The BTS (Basal Temperature of Snow) method to predict permafrost probability in mountain basins uses elevation as an easily available and spatially distributed independent variable. The elevation coefficient in the BTS regression model is, in effect, a substitute for ground temperature lapse rates. Previous work in Wolf Creek (60° 8'N 135° W), a mountain basin near Whitehorse, has shown that the model breaks down in a mid-elevation valley (1250 m asl) where actual permafrost probability is roughly twice that predicted by the model (60% vs. 20-30%). The existence of a double tree-line at the site suggested that air temperature inversions might be the cause of this inaccuracy (Lewkowicz and Ednie, 2004). This paper reports on a first year (08/2003-08/2004) of hourly air and ground temperature data collected along an altitudinal transect within the valley in upper Wolf Creek. Measurements were made at sites located 4, 8, 22, 82 and 162 m above the valley floor. Air temperature inversions between the lowest and highest measurement points occurred 42% of the time and in all months, but were most frequent and intense in winter (>60% of December and January) and least frequent in September (<25% of time). They generally developed after sunset and reached a maximum amplitude before sunrise. Only 11 inversions that lasted through more than one day occurred during the year, and only from October to February. The longest continuous duration was 145 h while the greatest inversion magnitude measured over the 160 m transect was 19° C. Ground surface temperatures are more difficult to interpret because of differences in soils and vegetation cover along the transect and the effects of seasonal snow cover. In many cases, however, air temperature inversions are not duplicated in the ground temperature record. Nevertheless, the annual altitudinal ground temperature gradient is much lower than would be expected from a standard atmospheric lapse rate, suggesting that the inversions do have an important impact on permafrost distribution at this site. More generally, therefore, it appears probable that any reduction in inversion frequency resulting from a more vigorous atmospheric circulation in the context of future climate change, would have a significant effect on permafrost distribution in mountain basins.
Spatial event cluster detection using an approximate normal distribution.
Torabi, Mahmoud; Rosychuk, Rhonda J
2008-12-12
In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events are large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
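The contrast between recursion-based probabilities and the normal approximation can be illustrated in the simplest setting of one event per case: a tail probability for an observed event count computed exactly from a Poisson distribution versus from a normal approximation with continuity correction. The counts below are made up for illustration, and the paper's compound Poisson method additionally accounts for individuals with multiple events.

```python
import numpy as np
from scipy import stats

expected = 42.7     # expected number of events in a candidate cluster (illustrative)
observed = 61       # observed number of events (illustrative)

# Exact Poisson tail probability P(X >= observed).
p_exact = stats.poisson.sf(observed - 1, expected)

# Normal approximation with continuity correction, using mean = variance = expected.
z = (observed - 0.5 - expected) / np.sqrt(expected)
p_normal = stats.norm.sf(z)

print(f"Poisson tail: {p_exact:.4f}   normal approximation: {p_normal:.4f}")
```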
Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet
2016-12-01
Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness of fit tests, and to estimate parameters for various statistical probability distributions that fit survival data. Results: Statistical analysis for the summary of statistics is reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness of fit test values were used for survival data, the highest values of goodness of fit statistics being considered indicative of the best fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers for further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer. Creative Commons Attribution License
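A generic sketch of the fitting-and-ranking step using scipy rather than EasyFit: fit several candidate families to survival times and compare Kolmogorov-Smirnov statistics (by the usual convention, a smaller K-S statistic indicates a closer fit). The survival times are synthetic stand-ins for SEER records, and the mapping of the Dagum family to scipy's Burr Type III parameterization (scipy.stats.burr) is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# Placeholder survival times in months, standing in for a SEER sample.
times = stats.gamma.rvs(a=1.8, scale=40.0, size=2_000, random_state=rng)

# Candidate families: Burr (Type XII), Dagum (Burr Type III), and Gamma.
candidates = {"Burr": stats.burr12, "Dagum (Burr III)": stats.burr, "Gamma": stats.gamma}

results = {}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0.0)                 # fix location at 0 for survival times
    ks = stats.kstest(times, dist.cdf, args=params).statistic
    results[name] = ks

for name, ks in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:18s} K-S statistic = {ks:.4f}")      # smaller = closer fit
```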
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
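A minimal simulation of the first claim, under simple assumptions: states of a Markov chain are encoded one-hot, a Hebbian co-activation update strengthens the synapse from the current state (pre) to the next state (post), and presynaptic competition normalizes the outgoing weights of each presynaptic unit. The weights then approach the forward transition probabilities. The transition matrix and learning rate are invented for illustration, and this sketch omits the covariance and STDP variants analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical 4-state Markov chain: P[i, j] = P(next = j | current = i).
P = np.array([[0.10, 0.60, 0.20, 0.10],
              [0.30, 0.10, 0.50, 0.10],
              [0.25, 0.25, 0.25, 0.25],
              [0.70, 0.10, 0.10, 0.10]])
n_states = P.shape[0]

W = np.full((n_states, n_states), 1.0 / n_states)   # W[i, j]: synapse from pre i to post j
lr = 0.01
state = 0
for _ in range(200_000):
    nxt = rng.choice(n_states, p=P[state])
    W[state, nxt] += lr                 # Hebbian co-activation of pre = state, post = nxt
    W[state] /= W[state].sum()          # presynaptic competition: outgoing weights sum to 1
    state = nxt

print(np.round(W, 2))                   # approaches the forward probabilities P
```

The normalization acts as the stabilizing force balancing the purely potentiating Hebbian term, so each row of W settles near the empirical forward transition distribution of its presynaptic state.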
Voice-onset time and buzz-onset time identification: A ROC analysis
NASA Astrophysics Data System (ADS)
Lopez-Bascuas, Luis E.; Rosner, Burton S.; Garcia-Albea, Jose E.
2004-05-01
Previous studies have employed signal detection theory to analyze data from speech and nonspeech experiments. Typically, signal distributions were assumed to be Gaussian. Schouten and van Hessen [J. Acoust. Soc. Am. 104, 2980-2990 (1998)] explicitly tested this assumption for an intensity continuum and a speech continuum. They measured response distributions directly and, assuming an interval scale, concluded that the Gaussian assumption held for both continua. However, Pastore and Macmillan [J. Acoust. Soc. Am. 111, 2432 (2002)] applied ROC analysis to Schouten and van Hessen's data, assuming only an ordinal scale. Their ROC curves supported the Gaussian assumption for the nonspeech signals only. Previously, Lopez-Bascuas [Proc. Audit. Bas. Speech Percept., 158-161 (1997)] found evidence with a rating scale procedure that the Gaussian model was inadequate for a voice-onset time continuum but not for a noise-buzz continuum. Both continua contained ten stimuli with asynchronies ranging from -35 ms to +55 ms. ROC curves (double-probability plots) are now reported for each pair of adjacent stimuli on the two continua. Both speech and nonspeech ROCs often appeared nonlinear, indicating non-Gaussian signal distributions under the usual zero-variance assumption for response criteria.
Protoptila (Trichoptera) of Costa Rica and a Review of the Central American Fauna
NASA Astrophysics Data System (ADS)
Blahnik, R. J.; Holzenthal, R. W.
2005-05-01
Protoptila is the most diverse of the genera in the subfamily Protoptilinae of the family Glossosomatidae, currently with 80 species, but many more from the Neotropics awaiting description. Fourteen species occur in the United States, only one of which is also known to occur in Mexico. This compares to 38 species and one subspecies (or about half of the total for the genus) currently known from Central America, mostly described in papers by Mosely and Flint. Although the majority of these, or some 27 species, occur in Mexico, this probably more closely reflects the historical intensity of collecting rather than the real diversity by region. In Costa Rica, 11 species are currently known, 8 of which are restricted in distribution to Costa Rica, or Costa Rica and Panama, and only 3 with distributions extending to Mexico. We are describing an additional 8 species from Costa Rica, bringing to 19 the number of species now known from the country. This represents an incredible diversity for such a small country and also a very high level of implied endemism, even considering the likelihood that some of the species will be found to have wider distributions.
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
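A generic sketch of the deconvolution step: discretize a Fredholm integral of the first kind as a linear system A x ≈ b and apply Tikhonov regularization, x_λ = argmin ||Ax − b||² + λ||x||², solved from the normal equations. The kernel, the "true" distribution, and the values of λ below are illustrative; they are not the kernel relating v sin I to the rotational-velocity distribution, nor the authors' rule for selecting the Tikhonov parameter.

```python
import numpy as np

rng = np.random.default_rng(12)

# Discretized Fredholm problem: b = A @ x_true + noise.
n = 80
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05 ** 2))   # assumed smoothing kernel
A /= A.sum(axis=1, keepdims=True)
x_true = np.exp(-((s - 0.40) ** 2) / 0.010) + 0.5 * np.exp(-((s - 0.75) ** 2) / 0.005)
b = A @ x_true + rng.normal(0.0, 0.01, n)

def tikhonov(A, b, lam):
    """Solve the regularized normal equations (A^T A + lam I) x = A^T b."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)

for lam in (1e-6, 1e-3, 1e-1):
    x_hat = tikhonov(A, b, lam)
    rms = np.sqrt(np.mean((x_hat - x_true) ** 2))
    print(f"lambda = {lam:g}   rms error vs x_true = {rms:.3f}")
```

Too small a λ amplifies noise and too large a λ over-smooths, which is why a principled rule for choosing the parameter (as the authors propose) matters.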
Quantitative assessment of building fire risk to life safety.
Guanquan, Chu; Jinhua, Sun
2008-06-01
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: probability and corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, Markov chain is combined with a time-dependent event tree for stochastic analysis on the occurrence probability of fire scenarios. To obtain consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires are designed based on different fire growth rates, after which uncertainty of onset time to untenable conditions can be characterized by probability distribution. When calculating occupant evacuation time, occupant premovement time is considered as a probability distribution. Consequences of a fire scenario can be evaluated according to probability distribution of evacuation time and onset time of untenable conditions. Then, fire risk to life safety can be evaluated based on occurrence probability and consequences of every fire scenario. To express the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
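A minimal sketch of combining a time-dependent event tree with a Markov chain: fire-protection-related scenario states evolve with assumed per-step transition probabilities, and the state-occupancy probabilities are propagated forward in time. The states and transition values are invented for illustration and are not taken from the article.

```python
import numpy as np

# States: 0 = fire suppressed, 1 = fire growing (suppression failed), 2 = flashover.
# Per-minute transition probabilities (illustrative assumptions).
T = np.array([[1.00, 0.00, 0.00],    # suppressed is absorbing
              [0.08, 0.87, 0.05],    # growing: may be suppressed or reach flashover
              [0.00, 0.00, 1.00]])   # flashover is absorbing

p = np.array([0.3, 0.7, 0.0])        # initial scenario probabilities just after ignition
for minute in range(1, 31):
    p = p @ T                        # Markov propagation of occurrence probabilities
    if minute % 10 == 0:
        print(f"t={minute:2d} min  suppressed={p[0]:.3f}  growing={p[1]:.3f}  flashover={p[2]:.3f}")
```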
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
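A toy version of the core EM step that Atropos builds on: fit an n-class Gaussian mixture to image intensities and assign each voxel the class with the highest posterior responsibility. This sketch omits the spatial prior maps, prior label maps, and MRF terms that Atropos adds, and the intensities are synthetic stand-ins rather than BrainWeb data.

```python
import numpy as np

rng = np.random.default_rng(13)

# Synthetic 1D "intensities" drawn from three tissue-like classes.
true_means = [30.0, 80.0, 120.0]
intensities = np.concatenate([rng.normal(m, 8.0, 4000) for m in true_means])

K = 3
mu = np.quantile(intensities, [0.2, 0.5, 0.8])      # crude initialization
var = np.full(K, intensities.var())
pi = np.full(K, 1.0 / K)

for _ in range(50):                                  # EM iterations
    # E-step: posterior responsibility of each class for each voxel.
    lik = np.stack([pi[k] / np.sqrt(2 * np.pi * var[k])
                    * np.exp(-(intensities - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(K)], axis=1)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: update mixture weights, means, and variances.
    Nk = resp.sum(axis=0)
    pi = Nk / len(intensities)
    mu = (resp * intensities[:, None]).sum(axis=0) / Nk
    var = (resp * (intensities[:, None] - mu) ** 2).sum(axis=0) / Nk

labels = resp.argmax(axis=1)                         # hard segmentation
print("estimated class means:", np.round(np.sort(mu), 1))
```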
An Open Source Multivariate Framework for n-Tissue Segmentation with Evaluation on Public Data
Tustison, Nicholas J.; Wu, Jue; Cook, Philip A.; Gee, James C.
2012-01-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs (http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool. PMID:21373993
NASA Astrophysics Data System (ADS)
Bonatto, Cristian; Endler, Antonio
2017-07-01
We investigate the occurrence of extreme and rare events, i.e., giant and rare light pulses, in a periodically modulated CO2 laser model. Due to nonlinear resonant processes, we show a scenario of interaction between chaotic bands of different orders, which may lead to the formation of extreme and rare events. We identify a crisis line in the modulation parameter space, and we show that, when the modulation amplitude increases, remaining in the vicinity of the crisis, some statistical properties of the laser pulses, such as the average and dispersion of amplitudes, do not change much, whereas the amplitude of extreme events grows enormously, giving rise to extreme events with much larger deviations than usually reported, with a significant probability of occurrence, i.e., with a long-tailed non-Gaussian distribution. We identify recurrent regular patterns, i.e., precursors, that anticipate the emergence of extreme and rare events, and we associate these regular patterns with unstable periodic orbits embedded in a chaotic attractor. We show that the precursors may or may not lead to the emergence of extreme events. Thus, we compute the probability of success or failure (false alarm) in the prediction of the extreme events, once a precursor is identified in the deterministic time series. We show that this probability depends on the accuracy with which the precursor is identified in the laser intensity time series.
The Renner effect in triatomic molecules with application to CH2+, MgNC and NH2.
Jensen, Per; Odaka, Tina Erica; Kraemer, W P; Hirano, Tsuneo; Bunker, P R
2002-03-01
We have developed a computational procedure, based on the variational method, for the calculation of the rovibronic energies of a triatomic molecule in an electronic state that becomes degenerate at the linear nuclear configuration. In such an electronic state the coupling caused by the electronic orbital angular momentum is very significant and is called the Renner effect. We include it, and the effect of spin-orbit coupling, in our program. We have developed the procedure to the point where spectral line intensities can be calculated so that absorption and emission spectra can be simulated. In order to gain insight into the nature of the eigenfunctions, we have introduced and calculated the overall bending probability density function f(rho) of the states. By projecting the eigenfunctions onto the Born-Oppenheimer basis, we have determined the probability density functions f+(rho) and f-(rho) associated with the individual Born-Oppenheimer states phi(-)elec and phi(+)elec. At a given temperature the Boltzmann averaged value of f(rho) over all the eigenstates gives the bending probability distribution function F(rho), and this can be related to the result of a Coulomb Explosion Imaging (CEI) experiment. We review our work and apply it to the molecules CH2+, MgNC and NH2, all of which are of astrophysical interest.
A multi-scale model for correlation in B cell VDJ usage of zebrafish
NASA Astrophysics Data System (ADS)
Pan, Keyao; Deem, Michael W.
2011-10-01
The zebrafish (Danio rerio) is one of the model animals used for the study of immunology because the dynamics in the adaptive immune system of zebrafish are similar to those in higher animals. In this work, we built a multi-scale model to simulate the dynamics of B cells in the primary and secondary immune responses of zebrafish. We use this model to explain the reported correlation between VDJ usage of B cell repertoires in individual zebrafish. We use a delay ordinary differential equation (ODE) system to model the immune responses in the 6-month lifespan of a zebrafish. This mean field theory gives the number of high-affinity B cells as a function of time during an infection. The sequences of those B cells are then taken from a distribution calculated by a 'microscopic' random energy model. This generalized NK model shows that mature B cells specific to one antigen largely possess a single VDJ recombination. The model allows first-principle calculation of the probability, p, that two zebrafish responding to the same antigen will select the same VDJ recombination. This probability p increases with the B cell population size and the B cell selection intensity. The probability p decreases with the B cell hypermutation rate. The multi-scale model predicts correlations in the immune system of the zebrafish that are highly similar to those observed in experiment.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many applications and has a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull family (shape parameter equal to one). In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model describes the likelihood function, followed by the posterior distribution and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
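As an illustration of the Bayesian machinery summarized above, the short Python sketch below uses the standard result that, for independent exponential competing risks and a non-informative prior proportional to 1/lambda_j, the posterior of each cause-specific rate lambda_j is a gamma distribution with shape d_j (failures from cause j) and rate T (total time on test). The failure counts, test time and mission time are made-up placeholders, not values from the paper.

    # Hedged sketch with hypothetical data; the paper derives the full analytic posteriors.
    import numpy as np

    rng = np.random.default_rng(6)
    d = np.array([14, 6])          # failures attributed to causes 1 and 2 (hypothetical)
    T = 480.0                      # total accumulated test time (hypothetical)
    t = 24.0                       # mission time of interest (hypothetical)

    # Posterior draws: lambda_j | data ~ Gamma(shape=d_j, rate=T)
    lam = rng.gamma(shape=d[:, None], scale=1.0 / T, size=(2, 20_000))

    reliability = np.exp(-lam.sum(axis=0) * t)                  # all risks acting
    net_p1 = 1.0 - np.exp(-lam[0] * t)                          # only cause 1 acting
    crude_p1 = lam[0] / lam.sum(axis=0) * (1.0 - reliability)   # cause 1 in presence of cause 2

    for name, x in [("reliability", reliability), ("net P1", net_p1), ("crude P1", crude_p1)]:
        print(name, np.round(np.percentile(x, [2.5, 50, 97.5]), 3))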
Fly's eye condenser based on chirped microlens arrays
NASA Astrophysics Data System (ADS)
Wippermann, Frank C.; Zeitner, Uwe-D.; Dannberg, Peter; Bräuer, Andreas; Sinzinger, Stefan
2007-09-01
Lens array arrangements are commonly used for the beam shaping of almost arbitrary input intensity distributions into a top-hat. The setup usually consists of a Fourier lens and two identical regular microlens arrays - often referred to as a tandem lens array - where the second one is placed in the focal plane of the first microlenses. Due to the periodic structure of regular arrays, the output intensity distribution is modulated by equidistant sharp intensity peaks which disturb the homogeneity. The equidistantly located intensity peaks can be suppressed by using a chirped and therefore non-periodic microlens array. A far field speckle pattern with more densely and irregularly located intensity peaks results, leading to an improved homogeneity of the intensity distribution. In contrast to stochastic arrays, chirped arrays consist of individually shaped lenses defined by a parametric description of each cell's optical function, which can be derived completely from analytical functions. This makes it possible to build tandem array setups that achieve a far field intensity distribution with a top-hat envelope. We propose a new concept for fly's eye condensers incorporating a chirped tandem microlens array for the generation of a top-hat far field intensity distribution with improved homogenization under coherent illumination. The setup is compatible with reflow of photoresist as the fabrication technique since plane substrates accommodating the arrays are used. Considerations for the design of the chirped microlens arrays, design rules, wave optical simulations and measurements of the far field intensity distributions are presented.
On probability-possibility transformations
NASA Technical Reports Server (NTRS)
Klir, George J.; Parviz, Behzad
1992-01-01
Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
Newton/Poisson-Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.
1990-01-01
NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for X(sup2) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
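The Newton iteration at the heart of such a calculation is compact enough to sketch; the fragment below is an illustrative re-implementation in Python (not the original NEWTPOIS code) that finds the Poisson parameter lam satisfying P(X <= k) = p, using the identity d/dlam P(X <= k) = -pmf(k; lam), and cross-checks the result against the gamma and chi-square percentile relations mentioned above.

    from scipy import stats

    def poisson_parameter(k, p, tol=1e-10, max_iter=100):
        """Newton's method for lam such that stats.poisson.cdf(k, lam) == p."""
        lam = k + 1.0                                 # a sensible starting guess near the answer
        for _ in range(max_iter):
            f = stats.poisson.cdf(k, lam) - p
            step = f / (-stats.poisson.pmf(k, lam))   # d/dlam cdf = -pmf
            lam = max(lam - step, 1e-12)              # keep the parameter positive
            if abs(step) < tol:
                break
        return lam

    lam = poisson_parameter(k=5, p=0.9)
    # lam equals the (1 - p) percentile of a gamma with integer shape k + 1,
    # i.e. half the chi-square percentile with 2*(k + 1) degrees of freedom.
    print(lam, stats.gamma.ppf(0.1, a=6), 0.5 * stats.chi2.ppf(0.1, df=12))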
Hidden regularity and universal classification of fast side chain motions in proteins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajeshwar, Rajitha; Smith, Jeremy C.; Krishnam, Marimuthu
Proteins display characteristic dynamical signatures that appear to be universal across all proteins regardless of topology and size. Here, we systematically characterize the universal features of fast side chain motions in proteins by examining the conformational energy surfaces of individual residues obtained using enhanced sampling molecular dynamics simulation (618 free energy surfaces obtained from 0.94 s MD simulation). The side chain conformational free energy surfaces obtained using the adaptive biasing force (ABF) method for a set of eight proteins with different molecular weights and secondary structures are used to determine the methyl axial NMR order parameters (O²axis), populations of side chain rotamer states (ρ), conformational entropies (Sconf), probability fluxes, and activation energies for side chain inter-rotameric transitions. The free energy barriers separating side chain rotamer states range from 0.3 to 12 kcal/mol in all proteins and follow a trimodal distribution with an intense peak at ~5 kcal/mol and two shoulders at ~3 and ~7.5 kcal/mol, indicating that some barriers are more favored than others by proteins to maintain a balance between their conformational stability and flexibility. The origin and the influences of the trimodal barrier distribution on the distribution of O²axis and the side chain conformational entropy are discussed. A hierarchical grading of rotamer states based on the conformational free energy barriers, entropy, and probability flux reveals three distinct classes of side chains in proteins. A unique nonlinear correlation is established between O²axis and the side chain rotamer populations (ρ). In conclusion, the apparent universality in the O²axis versus ρ correlation, trimodal barrier distribution, and distinct characteristics of three classes of side chains observed among all proteins indicates a hidden regularity (or commonality) in the dynamical heterogeneity of fast side chain motions in proteins.
Taubmann, Julia; Sharma, Koustubh; Uulu, Kubanychbek Zhumabai; Hines, James; Mishra, Charudutt
2015-01-01
The Endangered snow leopard Panthera uncia occurs in the Central Asian Mountains, which cover c. 2 million km2. Little is known about its status in the Kyrgyz Alay Mountains, a relatively narrow stretch of habitat connecting the southern and northern global ranges of the species. In 2010 we gathered information on current and past (1990, the last year of the Soviet Union) distributions of snow leopards and five sympatric large mammals across 14,000 km2 of the Kyrgyz Alay. We interviewed 95 key informants from local communities. Across 49 400-km2 grid cells we obtained 1,606 and 962 records of species occurrence (site use) in 1990 and 2010, respectively. The data were analysed using the multi-season site occupancy framework to incorporate uncertainty in detection across interviewees and time periods. High probability of use by snow leopards in the past was recorded in > 70% of the Kyrgyz Alay. Between the two sampling periods 39% of sites showed a high probability of local extinction of snow leopard. We also recorded high probability of local extinction of brown bear Ursus arctos (84% of sites) and Marco Polo sheep Ovis ammon polii (47% of sites), mainly in regions used intensively by people. Data indicated a high probability of local colonization by lynx Lynx lynx in 41% of the sites. Although wildlife has declined in areas of central and eastern Alay, regions in the north-west, and the northern and southern fringes appear to retain high conservation value.
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
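As a schematic of the moment-matching step described above, the following Python sketch adjusts the parameters of scipy's johnsonsu distribution so that its first four moments match a prescribed set; the target moments and starting values are hypothetical, and the parameterization is scipy's rather than whatever convention the authors adopted.

    import numpy as np
    from scipy import stats, optimize

    # Hypothetical target moments: mean, variance, skewness, excess kurtosis.
    target = np.array([1.0, 0.04, -0.8, 1.5])

    def moment_mismatch(params):
        a, b, loc, scale = params
        if b <= 0 or scale <= 0:                      # keep shape/scale parameters valid
            return np.full(4, 1e6)
        m, v, s, k = stats.johnsonsu.stats(a, b, loc=loc, scale=scale, moments='mvsk')
        return np.array([m, v, s, k]) - target

    sol = optimize.least_squares(moment_mismatch, x0=[0.0, 2.0, 1.0, 0.2])
    a, b, loc, scale = sol.x
    print(stats.johnsonsu.stats(a, b, loc=loc, scale=scale, moments='mvsk'))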
Flint, Richard; Windsor, John A
2004-04-01
The physiological response to treatment is a better predictor of outcome in acute pancreatitis than are traditional static measures. Retrospective diagnostic test study. The criterion standard was Organ Failure Score (OFS) and Acute Physiology and Chronic Health Evaluation II (APACHE II) score at the time of hospital admission. Intensive care unit of a tertiary referral center, Auckland City Hospital, Auckland, New Zealand. Consecutive sample of 92 patients (60 male, 32 female; median age, 61 years; range, 24-79 years) with severe acute pancreatitis. Twenty patients were not included because of incomplete data. The cause of pancreatitis was gallstones (42%), alcohol use (27%), or other (31%). At hospital admission, the mean +/- SD OFS was 8.1 +/- 6.1, and the mean +/- SD APACHE II score was 19.9 +/- 8.2. All cases were managed according to a standardized protocol. There was no randomization or testing of any individual interventions. Survival and death. There were 32 deaths (pretest probability of dying was 35%). The physiological response to treatment was more accurate in predicting the outcome than was OFS or APACHE II score at hospital admission. For example, 17 patients had an initial OFS of 7-8 (posttest probability of dying was 58%); after 48 hours, 7 had responded to treatment (posttest probability of dying was 28%), and 10 did not respond (posttest probability of dying was 82%). The effect of the change in OFS and APACHE II score was graphically depicted by using a series of logistic regression equations. The resultant sigmoid curve suggests that there is a midrange of scores (the steep portion of the graph) within which the probability of death is most affected by the response to intensive care treatment. Measuring the initial severity of pancreatitis combined with the physiological response to intensive care treatment is a practical and clinically relevant approach to predicting death in patients with severe acute pancreatitis.
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four dimensional probability distribution function was validated by comparing it to the two dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web-service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in a javascript object notation format.
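The core sampling step can be illustrated with a short Python sketch; the grid resolution, the random stand-in for the measured four-dimensional probability distribution function, and the single global affine transform below are assumptions for illustration only (the paper uses measured lesion locations and piecewise affine mappings).

    import numpy as np

    rng = np.random.default_rng(0)
    bins = 32                                          # grid resolution per axis (assumed)
    pdf4d = rng.random((bins, bins, bins, bins))       # stand-in for the measured 4D PDF
    pdf4d /= pdf4d.sum()                               # normalise to a probability mass function

    def sample_lesion_location(pdf, rng):
        """Draw correlated (x_cc, y_cc, x_ml, y_ml) grid indices from the joint PDF."""
        flat_index = rng.choice(pdf.size, p=pdf.ravel())
        return np.unravel_index(flat_index, pdf.shape)

    x_cc, y_cc, x_ml, y_ml = sample_lesion_location(pdf4d, rng)

    # The sampled standardised coordinates would then be mapped onto an individual
    # mammogram; a single global affine is shown here purely as a placeholder.
    A = np.array([[1.1, 0.0], [0.0, 0.9]])
    t = np.array([5.0, -3.0])
    cc_point = A @ np.array([x_cc, y_cc]) + t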
NASA Astrophysics Data System (ADS)
Laws, William R.; Ross, J. B. Alexander
1992-04-01
The time-resolved fluorescence properties of a tryptophan residue should be useful for probing protein structure, function, and dynamics. To date, however, the non-single exponential fluorescence intensity decay kinetics for numerous peptides and proteins having a single tryptophan residue have not been adequately explained. Many possibilities have been considered and include: (1) contributions from the 1La and 1Lb states of indole; (2) excited-state hydrogen exchange; and (3) environmental heterogeneity from χ1 and χ2 rotamers. In addition, it has been suggested that generally many factors contribute to the decay and a distribution of probabilities may be more appropriate. Two recent results support multiple species due to conformational heterogeneity as the major contributor to complex kinetics. First, a rotationally constrained tryptophan analogue has fluorescence intensity decay kinetics that can be described by the sum of two exponentials with amplitudes comparable to the relative populations of the two rotational isomers. Second, the multiple exponentials observed for tyrosine-containing model compounds and peptides correlate with the χ1 rotamer populations independently determined by 1H NMR. We now report similar correlations between rotamer populations and fluorescence intensity decay kinetics for a tryptophan analogue of oxytocin. It appears for this compound that either χ2 rotations do not appreciably alter the indole environment, χ2 rotations are rapid enough to average the observed dependence, or only one of two possible χ2 populations is associated with each χ1 rotamer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobi, Annika, E-mail: Annika.Jakobi@OncoRay.de; Bandurska-Luque, Anna; Department of Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden
Purpose: The purpose of this study was to determine, by treatment plan comparison along with normal tissue complication probability (NTCP) modeling, whether a subpopulation of patients with head and neck squamous cell carcinoma (HNSCC) could be identified that would gain substantial benefit from proton therapy in terms of NTCP. Methods and Materials: For 45 HNSCC patients, intensity modulated radiation therapy (IMRT) was compared to intensity modulated proton therapy (IMPT). Physical dose distributions were evaluated as well as the resulting NTCP values, using modern models for acute mucositis, xerostomia, aspiration, dysphagia, laryngeal edema, and trismus. Patient subgroups were defined based on primary tumor location. Results: Generally, IMPT reduced the NTCP values while keeping similar target coverage for all patients. Subgroup analyses revealed a higher individual reduction of swallowing-related side effects by IMPT for patients with tumors in the upper head and neck area, whereas the risk reduction of acute mucositis was more pronounced in patients with tumors in the larynx region. More patients with tumors in the upper head and neck area had a reduction in NTCP of more than 10%. Conclusions: Subgrouping can help to identify patients who may benefit more than others from the use of IMPT and, thus, can be a useful tool for a preselection of patients in the clinic where there are limited PT resources. Because the individual benefit differs within a subgroup, the relative merits should additionally be evaluated by individual treatment plan comparisons.
How to model a negligible probability under the WTO sanitary and phytosanitary agreement?
Powell, Mark R
2013-06-01
Since the 1997 EC--Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia--Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia--Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
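The two candidate models and the expected values quoted above are easy to verify numerically; the sketch below simply draws from Uniform(0, 10⁻⁶) and Triangular(min = 0, mode = 0, max = 10⁻⁶) distributions with numpy.

    import numpy as np

    rng = np.random.default_rng(1)
    uniform_draws = rng.uniform(0.0, 1e-6, size=1_000_000)
    triangular_draws = rng.triangular(left=0.0, mode=0.0, right=1e-6, size=1_000_000)

    print(uniform_draws.mean())     # ~5.0e-7, the midpoint noted by the Panel
    print(triangular_draws.mean())  # ~3.3e-7, the expected value of the triangular model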
Fourier Method for Calculating Fission Chain Neutron Multiplicity Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.; Chandrasekaran, Hema; Walston, Sean E.
Here, a new way of utilizing the fast Fourier transform is developed to compute the probability distribution for a fission chain to create n neutrons. We then extend this technique to compute the probability distributions for detecting n neutrons. Lastly, our technique can be used for fission chains initiated by either a single neutron inducing a fission or by the spontaneous fission of another isotope.
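The abstract does not spell out the algorithm, but its basic building block - recovering a discrete probability distribution from samples of its probability generating function on the unit circle with an inverse FFT - can be sketched as follows. The single-fission multiplicity values are hypothetical, and a fixed number of independent fissions is used rather than a full fission chain.

    import numpy as np

    p_single = np.array([0.03, 0.16, 0.33, 0.30, 0.13, 0.05])  # hypothetical P(nu = 0..5)
    m = 4                                                      # independent fissions (assumed)
    N = m * (p_single.size - 1) + 1                            # support size of the total count

    z = np.exp(-2j * np.pi * np.arange(N) / N)                 # points on the unit circle
    g = np.polyval(p_single[::-1], z)                          # PGF g(z) of a single fission
    p_total = np.fft.ifft(g ** m).real                         # distribution of the summed count

    # Cross-check against direct repeated convolution.
    check = p_single
    for _ in range(m - 1):
        check = np.convolve(check, p_single)
    print(np.allclose(p_total, check))                         # True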
Simulations of large acoustic scintillations in the straits of Florida.
Tang, Xin; Tappert, F D; Creamer, Dennis B
2006-12-01
Using a full-wave acoustic model, Monte Carlo numerical studies of intensity fluctuations in a realistic shallow water environment that simulates the Straits of Florida, including internal wave fluctuations and bottom roughness, have been performed. Results show that the sound intensity at distant receivers scintillates dramatically. The acoustic scintillation index SI increases rapidly with propagation range and is significantly greater than unity at ranges beyond about 10 km. This result supports a theoretical prediction by one of the authors. Statistical analyses show that the distribution of intensity of the random wave field saturates to the expected Rayleigh distribution with SI= 1 at short range due to multipath interference effects, and then SI continues to increase to large values. This effect, which is denoted supersaturation, is universal at long ranges in waveguides having lossy boundaries (where there is differential mode attenuation). The intensity distribution approaches a log-normal distribution to an excellent approximation; it may not be a universal distribution and comparison is also made to a K distribution. The long tails of the log-normal distribution cause "acoustic intermittency" in which very high, but rare, intensities occur.
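For reference, the scintillation index is SI = <I²>/<I>² - 1; the toy sketch below (synthetic samples, not the Monte Carlo field computed in the paper) shows SI near 1 for saturated Rayleigh statistics and SI above 1 for a log-normal intensity distribution.

    import numpy as np

    rng = np.random.default_rng(2)
    I_rayleigh = rng.exponential(scale=1.0, size=200_000)    # intensity of a Rayleigh field is exponential
    I_lognormal = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

    def scintillation_index(I):
        return I.var() / I.mean() ** 2

    print(scintillation_index(I_rayleigh))    # ~1.0 (saturation)
    print(scintillation_index(I_lognormal))   # ~exp(sigma**2) - 1 ~ 1.72 (supersaturation)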
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu
In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For the continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.
Extreme geomagnetic storms: Probabilistic forecasts and their uncertainties
Riley, Pete; Love, Jeffrey J.
2017-01-01
Extreme space weather events are low-frequency, high-risk phenomena. Estimating their rates of occurrence, as well as their associated uncertainties, is difficult. In this study, we derive statistical estimates and uncertainties for the occurrence rate of an extreme geomagnetic storm on the scale of the Carrington event (or worse) occurring within the next decade. We model the distribution of events as either a power law or lognormal distribution and use (1) the Kolmogorov-Smirnov statistic to estimate goodness of fit, (2) bootstrapping to quantify the uncertainty in the estimates, and (3) likelihood ratio tests to assess whether one distribution is preferred over another. Our best estimate for the probability of another extreme geomagnetic event comparable to the Carrington event occurring within the next 10 years is 10.3% (95% confidence interval (CI) [0.9, 18.7]) for a power law distribution but only 3.0% (95% CI [0.6, 9.0]) for a lognormal distribution. However, our results depend crucially on (1) how we define an extreme event, (2) the statistical model used to describe how the events are distributed in intensity, (3) the techniques used to infer the model parameters, and (4) the data and duration used for the analysis. We test a major assumption that the data represent time stationary processes and discuss the implications. If the current trends persist, suggesting that we are entering a period of lower activity, our forecasts may represent upper limits rather than best estimates.
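A stripped-down version of this kind of estimate can be sketched in Python; the event magnitudes, catalogue length and threshold below are synthetic placeholders (the paper works with Dst data and also considers a power-law model), and only the lognormal fit with a simple bootstrap is shown.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    magnitudes = rng.lognormal(mean=5.0, sigma=0.6, size=300)   # stand-in storm magnitudes
    events_per_year = len(magnitudes) / 50.0                    # assumed 50-year catalogue
    threshold = 850.0                                           # hypothetical Carrington-scale size

    def decadal_probability(sample):
        shape, loc, scale = stats.lognorm.fit(sample, floc=0.0)
        p_exceed = stats.lognorm.sf(threshold, shape, loc=loc, scale=scale)
        return 1.0 - np.exp(-events_per_year * 10.0 * p_exceed)  # Poisson occurrence model

    boot = [decadal_probability(rng.choice(magnitudes, size=len(magnitudes), replace=True))
            for _ in range(2000)]
    print(decadal_probability(magnitudes), np.percentile(boot, [2.5, 97.5]))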
Application of cause-and-effect analysis to potentiometric titration.
Kufelnicki, A; Lis, S; Meinrath, G
2005-08-01
A first attempt has been made to interpret physicochemical data from potentiometric titration analysis in accordance with the complete measurement-uncertainty budget approach (bottom-up) of ISO and Eurachem. A cause-and-effect diagram is established and discussed. Titration data for arsenazo III are used as a basis for this discussion. The commercial software Superquad is used and applied within a computer-intensive resampling framework. The cause-and-effect diagram is applied to evaluation of seven protonation constants of arsenazo III in the pH range 2-10.7. The data interpretation is based on empirical probability distributions and their analysis by second-order correct confidence estimates. The evaluated data are applied in the calculation of a speciation diagram including uncertainty estimates using the probabilistic speciation software Ljungskile.
Introducing the Met Office 2.2-km Europe-wide convection-permitting regional climate simulations
NASA Astrophysics Data System (ADS)
Kendon, Elizabeth J.; Chan, Steven C.; Berthou, Segolene; Fosser, Giorgia; Roberts, Malcolm J.; Fowler, Hayley J.
2017-04-01
The Met Office is currently conducting Europe-wide 2.2-km convection-permitting model (CPM) simulations driven by ERA-Interim reanalysis and present/future-climate GCM simulations. Here, we present the preliminary results of these new European simulations examining daily and sub-daily precipitation outputs in comparison with observations across Europe, 12-km European and 1.5-km UK climate model simulations. As the simulations are not yet complete, we focus on diagnostics that are relatively robust with a limited amount of data; for instance, the diurnal cycle and the probability distribution of daily and sub-daily precipitation intensities. We will also present specific case studies that showcase the benefits of using continental-scale CPM simulations over previously-available small-domain CPM simulations.
Ground Water and Climate Change
NASA Technical Reports Server (NTRS)
Taylor, Richard G.; Scanlon, Bridget; Doell, Petra; Rodell, Matt; van Beek, Rens; Wada, Yoshihide; Longuevergne, Laurent; Leblanc, Marc; Famiglietti, James S.; Edmunds, Mike;
2013-01-01
As the world's largest distributed store of fresh water, ground water plays a central part in sustaining ecosystems and enabling human adaptation to climate variability and change. The strategic importance of ground water for global water and food security will probably intensify under climate change as more frequent and intense climate extremes (droughts and floods) increase variability in precipitation, soil moisture and surface water. Here we critically review recent research assessing the impacts of climate on ground water through natural and human-induced processes as well as through groundwater-driven feedbacks on the climate system. Furthermore, we examine the possible opportunities and challenges of using and sustaining groundwater resources in climate adaptation strategies, and highlight the lack of groundwater observations, which, at present, limits our understanding of the dynamic relationship between ground water and climate.
Ortiz-Rascón, E; Bruce, N C; Garduño-Mejía, J; Carrillo-Torres, R; Hernández-Paredes, J; Álvarez-Ramos, M E
2017-11-20
This paper discusses the main differences between two methods for determining the optical properties of tissue optical phantoms by fitting the spatial and temporal intensity distribution functions to diffusion approximation theory. The consistency of the optical property values is verified by changing the width of the container holding the turbid medium; since the optical properties are intrinsic to the scattering medium and independent of the container width, stability of these values across different widths indicates a better measurement system for acquiring the optical properties. It is shown that the temporal fitting method presents higher stability than the spatial fitting method; this is probably due to the addition of the time-of-flight parameter into the diffusion theory.
Universality of local dissipation scales in buoyancy-driven turbulence.
Zhou, Quan; Xia, Ke-Qing
2010-03-26
We report an experimental investigation of the local dissipation scale field eta in turbulent thermal convection. Our results reveal two types of universality of eta. The first one is that, for the same flow, the probability density functions (PDFs) of eta are insensitive to turbulent intensity and large-scale inhomogeneity and anisotropy of the system. The second is that the small-scale dissipation dynamics in buoyancy-driven turbulence can be described by the same models developed for homogeneous and isotropic turbulence. However, the exact functional form of the PDF of the local dissipation scale is not universal with respect to different types of flows, but depends on the integral-scale velocity boundary condition, which is found to have an exponential, rather than Gaussian, distribution in turbulent Rayleigh-Bénard convection.
Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2014-01-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
Ground water and climate change
Taylor, Richard G.; Scanlon, Bridget R.; Döll, Petra; Rodell, Matt; van Beek, Rens; Wada, Yoshihide; Longuevergne, Laurent; Leblanc, Marc; Famiglietti, James S.; Edmunds, Mike; Konikow, Leonard F.; Green, Timothy R.; Chen, Jianyao; Taniguchi, Makoto; Bierkens, Marc F.P.; MacDonald, Alan; Fan, Ying; Maxwell, Reed M.; Yechieli, Yossi; Gurdak, Jason J.; Allen, Diana M.; Shamsudduha, Mohammad; Hiscock, Kevin; Yeh, Pat J.-F.; Holman, Ian; Treidel, Holger
2012-01-01
As the world's largest distributed store of fresh water, ground water plays a central part in sustaining ecosystems and enabling human adaptation to climate variability and change. The strategic importance of ground water for global water and food security will probably intensify under climate change as more frequent and intense climate extremes (droughts and floods) increase variability in precipitation, soil moisture and surface water. Here we critically review recent research assessing the impacts of climate on ground water through natural and human-induced processes as well as through groundwater-driven feedbacks on the climate system. Furthermore, we examine the possible opportunities and challenges of using and sustaining groundwater resources in climate adaptation strategies, and highlight the lack of groundwater observations, which, at present, limits our understanding of the dynamic relationship between ground water and climate.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
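A minimal sketch of the two-component idea is given below: synthetic z-scores, a theoretical N(0, 1) null component assumed for simplicity, and a plain EM loop rather than the authors' software. The fitted tau0 values are the posterior probabilities that each gene is null.

    import numpy as np
    from scipy.stats import norm

    def fit_null_mixture(z, n_iter=200):
        pi0, mu1, s1 = 0.9, 2.0, 1.0                   # crude starting values
        for _ in range(n_iter):
            f0 = pi0 * norm.pdf(z, 0.0, 1.0)           # null component (assumed N(0, 1))
            f1 = (1.0 - pi0) * norm.pdf(z, mu1, s1)    # non-null component
            tau0 = f0 / (f0 + f1)                      # E-step: posterior P(null | z)
            pi0 = tau0.mean()                          # M-step
            w = 1.0 - tau0
            mu1 = np.sum(w * z) / np.sum(w)
            s1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / np.sum(w))
        return pi0, mu1, s1, tau0

    rng = np.random.default_rng(4)
    z = np.concatenate([rng.normal(0, 1, 900), rng.normal(2.5, 1, 100)])  # synthetic z-scores
    pi0, mu1, s1, posterior_null = fit_null_mixture(z)
    print(pi0, mu1, s1)      # roughly 0.9, 2.5 and 1.0 for this synthetic example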
NASA Astrophysics Data System (ADS)
Jiang, Runqing
Intensity-modulated radiation therapy (IMRT) uses non-uniform beam intensities within a radiation field to provide patient-specific dose shaping, resulting in a dose distribution that conforms tightly to the planning target volume (PTV). Unavoidable geometric uncertainty arising from patient repositioning and internal organ motion can lead to lower conformality index (CI) during treatment delivery, a decrease in tumor control probability (TCP) and an increase in normal tissue complication probability (NTCP). The CI of the IMRT plan depends heavily on steep dose gradients between the PTV and organ at risk (OAR). Geometric uncertainties reduce the planned dose gradients and result in a less steep or "blurred" dose gradient. The blurred dose gradients can be maximized by constraining the dose objective function in the static IMRT plan or by reducing geometric uncertainty during treatment with corrective verification imaging. Internal organ motion and setup error were evaluated simultaneously for 118 individual patients with implanted fiducials and MV electronic portal imaging (EPI). A Gaussian probability density function (PDF) is reasonable for modeling geometric uncertainties as indicated by the 118 patients group. The Gaussian PDF is patient specific and group standard deviation (SD) should not be used for accurate treatment planning for individual patients. In addition, individual SD should not be determined or predicted from small imaging samples because of random nature of the fluctuations. Frequent verification imaging should be employed in situations where geometric uncertainties are expected. Cumulative PDF data can be used for re-planning to assess accuracy of delivered dose. Group data is useful for determining worst case discrepancy between planned and delivered dose. The margins for the PTV should ideally represent true geometric uncertainties. The measured geometric uncertainties were used in this thesis to assess PTV coverage, dose to OAR, equivalent uniform dose per fraction (EUDf) and NTCP. The dose distribution including geometric uncertainties was determined from integration of the convolution of the static dose gradient with the PDF. Integration of the convolution of the static dose and derivative of the PDF can also be used to determine the dose including geometric uncertainties although this method was not investigated in detail. Local maximum dose gradient (LMDG) was determined via optimization of dose objective function by manually adjusting DVH control points or selecting beam numbers and directions during IMRT treatment planning. Minimum SD (SDmin) is used when geometric uncertainty is corrected with verification imaging. Maximum SD (SDmax) is used when the geometric uncertainty is known to be large and difficult to manage. SDmax was 4.38 mm in anterior-posterior (AP) direction, 2.70 mm in left-right (LR) direction and 4.35 mm in superior-inferior (SI) direction; SDmin was 1.1 mm in all three directions if less than 2 mm threshold was used for uncorrected fractions in every direction. EUDf is a useful QA parameter for interpreting the biological impact of geometric uncertainties on the static dose distribution. The EUD f has been used as the basis for the time-course NTCP evaluation in the thesis. Relative NTCP values are useful for comparative QA checking by normalizing known complications (e.g. reported in the RTOG studies) to specific DVH control points. 
For prostate cancer patients, rectal complications were evaluated from specific RTOG clinical trials and detailed evaluation of the treatment techniques (e.g. dose prescription, DVH, number of beams, beam angles). Treatment plans that did not meet DVH constraints represented additional complication risk. Geometric uncertainties improved or worsened rectal NTCP depending on individual internal organ motion within the patient.
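The convolution idea described above can be illustrated with a toy one-dimensional example; the dose profile and grid are arbitrary, and only the SDmin and SDmax values are taken from the text. Blurring the static profile with the Gaussian setup-error PDF visibly lowers the maximum dose gradient.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dx = 0.5                                                 # grid spacing in mm (assumed)
    x = np.arange(0.0, 100.0, dx)
    static_dose = np.where((x > 30) & (x < 70), 70.0, 5.0)   # toy profile with sharp edges

    blurred_min = gaussian_filter1d(static_dose, sigma=1.1 / dx)    # frequent imaging, SDmin
    blurred_max = gaussian_filter1d(static_dose, sigma=4.38 / dx)   # uncorrected setup, SDmax

    # The steepest dose gradient (per mm) drops as the geometric uncertainty grows.
    for dose in (static_dose, blurred_min, blurred_max):
        print(np.abs(np.gradient(dose, dx)).max())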
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden variables model in which the hidden variables have a p-adic probability distribution ρ(λ) while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretical axiomatics. A frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability. p-adic frequency probability is defined as the limit of relative frequencies νn but in the p-adic metric. We study a model with p-adic stochastics on the level of the hidden variables description. But, of course, responses of macroapparatuses have to be described by ordinary stochastics. Thus our model describes a mixture of p-adic stochastics of the microworld and ordinary stochastics of macroapparatuses. In this model the probabilities for physical observables are ordinary probabilities. At the same time Bell's inequality is violated.
Statistical characteristics of mechanical heart valve cavitation in accelerated testing.
Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M
2004-07-01
Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in-vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve, to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and cavitation impulse were log-normally distributed, and the time span was normally distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for establishing further the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.
Deveau, Michael A; Gutiérrez, Alonso N; Mackie, Thomas R; Tomé, Wolfgang A; Forrest, Lisa J
2010-01-01
Intensity-modulated radiation therapy (IMRT) can be employed to yield precise dose distributions that tightly conform to targets and reduce high doses to normal structures by generating steep dose gradients. Because of these sharp gradients, daily setup variations may have an adverse effect on clinical outcome such that an adjacent normal structure may be overdosed and/or the target may be underdosed. This study provides a detailed analysis of the impact of daily setup variations on optimized IMRT canine nasal tumor treatment plans when variations are not accounted for due to the lack of image guidance. Setup histories of ten patients with nasal tumors previously treated using helical tomotherapy were replanned retrospectively to study the impact of daily setup variations on IMRT dose distributions. Daily setup shifts were applied to IMRT plans on a fraction-by-fraction basis. Using mattress immobilization and laser alignment, mean setup error magnitude in any single dimension was at least 2.5 mm (0-10.0 mm). With inclusions of all three translational coordinates, mean composite offset vector was 5.9 +/- 3.3 mm. Due to variations, a loss of equivalent uniform dose for target volumes of up to 5.6% was noted which corresponded to a potential loss in tumor control probability of 39.5%. Overdosing of eyes and brain was noted by increases in mean normalized total dose and highest normalized dose given to 2% of the volume. Findings suggest that successful implementation of canine nasal IMRT requires daily image guidance to ensure accurate delivery of precise IMRT distributions when non-rigid immobilization techniques are utilized. Unrecognized geographical misses may result in tumor recurrence and/or radiation toxicities to the eyes and brain.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
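For the classic, unconstrained case the transition probabilities and the stationary distribution have a simple closed form in terms of the leading eigenpair of the adjacency matrix; the sketch below (a toy four-node graph, not an example from the paper) contrasts the resulting stationary law with the degree-proportional one of an ordinary random walk. The paper generalizes this construction to state- and path-dependent constraints.

    import numpy as np

    A = np.array([[0, 1, 1, 0],        # adjacency matrix of a small connected graph
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    eigvals, eigvecs = np.linalg.eigh(A)
    lam = eigvals[-1]                  # largest eigenvalue (Perron root)
    psi = np.abs(eigvecs[:, -1])       # corresponding positive eigenvector

    P = A * psi[None, :] / (lam * psi[:, None])   # P_ij = A_ij * psi_j / (lam * psi_i)
    pi_merw = psi ** 2 / np.sum(psi ** 2)         # stationary distribution of the MERW
    pi_generic = A.sum(axis=1) / A.sum()          # degree-proportional stationary law

    print(P.sum(axis=1))               # each row sums to 1
    print(pi_merw, pi_generic)         # the two stationary distributions differ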
NASA Astrophysics Data System (ADS)
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, sitting on the continental shelf of Bay of Bengal, is one of the most vulnerable deltas to the occurrence of extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-Square test (χ²), were employed. The overall best-fit probability distribution was identified from the highest overall score obtained across the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while Lognormal, Weibull and Pearson 5 were best fitted for pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR was 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probability of getting an annual MDR of >50, >100, >150, >200 and >250 mm was estimated as 99, 85, 40, 12 and 3 % level of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
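As a schematic of the fitting-and-return-period workflow described above, the Python sketch below uses a synthetic 29-year record, only two of the six candidate models, and only the Kolmogorov-Smirnov test; none of the numbers correspond to the Sagar Island data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    annual_mdr = rng.lognormal(mean=4.0, sigma=0.45, size=29)   # stand-in annual maximum daily rainfall (mm)

    fits = {
        "normal": (stats.norm, stats.norm.fit(annual_mdr)),
        "lognormal": (stats.lognorm, stats.lognorm.fit(annual_mdr, floc=0.0)),
    }
    for name, (dist, params) in fits.items():
        ks = stats.kstest(annual_mdr, dist.cdf, args=params)
        print(name, "KS statistic:", round(ks.statistic, 3))

    # A T-year return period corresponds to the (1 - 1/T) quantile of the fitted model.
    dist, params = fits["lognormal"]
    for T in (2, 5, 10, 20, 25):
        print(T, "yr MDR:", round(dist.ppf(1.0 - 1.0 / T, *params), 1), "mm")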
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2017-05-01
We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2018-04-01
We present an analysis of measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that our method can greatly improve the key rate compared with prior results, especially under large intensity fluctuations and channel attenuation, provided the intensity fluctuations of the different sources are correlated.
The role of emotions in the maintenance of cooperative behaviors
NASA Astrophysics Data System (ADS)
Zhang, Chunyan; Zhang, Jianlei; Weissing, Franz J.
2014-04-01
Our attention is focused on how individual emotions influence collective behaviors, which captures an aspect of reality missing from past studies: free riders may suffer some stress, which may vary jointly with the individual stress intensity and the size of the gaming group. With an evolutionary game theoretical approach, we derive the fixation probability for one mutant cooperator to invade and dominate a whole defecting population. When the stress intensity exceeds a threshold, natural selection favors cooperators replacing defectors in a finite population. We further infer that a lower stress intensity suffices for one mutant cooperator to become fixed with an advantageous probability in a larger population. Moreover, when the gaming group is smaller than the population size, the larger the return from the public goods, the lower the stress-intensity threshold required for cooperators to fully dominate. We hope our study shows that individual sentiments or psychological activities can open up novel explanations for the puzzle of collective action.
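The fixation-probability argument above can be made concrete with the standard Moran-process formula. The sketch below uses that generic machinery with an assumed, illustrative payoff structure (a whole-population public goods game in which free riders pay a stress cost s); it is not the paper's exact model, but it reproduces the qualitative claim that cooperators are favored (fixation probability above the neutral benchmark 1/N) once the stress intensity crosses a threshold.

```python
# Sketch: fixation probability of a single cooperator in a finite population,
# using the standard Moran-process formula
#   rho_C = 1 / (1 + sum_{k=1}^{N-1} prod_{i=1}^{k} f_D(i)/f_C(i)).
# The payoff structure (public goods game where defectors pay a stress cost s)
# is an illustrative assumption, not the paper's exact model.
import numpy as np

def fixation_probability(N, r, c, s, w):
    """N: population size, r: public-goods multiplier, c: cooperation cost,
    s: stress intensity felt by free riders, w: selection intensity."""
    def payoffs(i):                       # i = current number of cooperators
        share = r * c * i / N             # everyone receives the shared return
        return share - c, share - s       # (cooperator, defector) payoffs
    ratios = []
    for i in range(1, N):
        pc, pd = payoffs(i)
        f_c, f_d = np.exp(w * pc), np.exp(w * pd)   # exponential fitness mapping
        ratios.append(f_d / f_c)
    return 1.0 / (1.0 + np.sum(np.cumprod(ratios)))

N = 50
for s in (0.0, 0.5, 1.0, 2.0):
    rho = fixation_probability(N, r=3.0, c=1.0, s=s, w=1.0)
    print(f"stress s={s:3.1f}: rho_C={rho:.4f}  (neutral benchmark 1/N={1/N:.4f})")
```

In this simplified payoff model the threshold sits at s = c: once the stress cost paid by defectors exceeds the cooperation cost, selection favors the invading cooperator.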
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases when the letter probabilities are specified as the spacings from a random division of the unit interval, for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the word probability distribution in the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normal away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings due to Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
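A small simulation makes the first universality property tangible: letter probabilities are drawn as the spacings of a random division of the unit interval, a "monkey" types random characters, and the rank-frequency slope of the resulting words is estimated on a log-log scale. The alphabet size, space probability, and text length below are illustrative choices; the fitted slope should come out close to -1.

```python
# Sketch: the "monkey at a typewriter" random-typing model behind Zipf's law.
# Letter probabilities are spacings of a random division of the unit interval;
# the rank-frequency slope is estimated on a log-log scale.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_letters, p_space, n_chars = 20, 0.2, 2_000_000

# letter probabilities from uniform spacings, rescaled to total 1 - p_space
cuts = np.sort(rng.uniform(size=n_letters - 1))
letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0]))) * (1 - p_space)
symbols_p = np.concatenate((letter_p, [p_space]))            # last symbol = space

stream = rng.choice(n_letters + 1, size=n_chars, p=symbols_p)
text = "".join(chr(97 + s) if s < n_letters else " " for s in stream)
words = Counter(w for w in text.split() if w)

freqs = np.array(sorted(words.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)
keep = freqs >= 5                                            # drop the noisy tail
slope = np.polyfit(np.log(ranks[keep]), np.log(freqs[keep]), 1)[0]
print(f"{len(words)} distinct words, rank-frequency slope ~ {slope:.2f}")
```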
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
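The USGS Fortran routines themselves are not reproduced here; as a hedged point of reference, the sketch below only shows modern scipy equivalents for a few of the distributions and functions the two report versions list (the call names and parameter choices are scipy's, not the USGS API).

```python
# Sketch: modern scipy equivalents for some distributions and functions listed
# in the USGS report (this is not the USGS Fortran API, only an illustration).
from scipy import stats, special

x = 2.5
print(stats.norm.cdf(x))                    # Gaussian (normal) CDF
print(stats.chi2.cdf(x, df=4))              # chi-square CDF
print(stats.pearson3.ppf(0.99, skew=0.5))   # Pearson Type III quantile
print(stats.weibull_min.cdf(x, c=1.5))      # Weibull CDF
print(stats.t.ppf(0.975, df=10))            # Student's t quantile
print(stats.f.cdf(x, dfn=3, dfd=12))        # Snedecor's F CDF
print(special.i0(x), special.gammaln(x))    # Bessel I0, log-gamma
print(stats.kstwobign.ppf(0.95))            # Kolmogorov-Smirnov D, asymptotic quantile
print(stats.uniform.rvs(size=3), stats.norm.rvs(size=3))  # random numbers
```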
Universal laws of human society's income distribution
NASA Astrophysics Data System (ADS)
Tao, Yong
2015-10-01
General equilibrium equations in economics play the same role as many-body Newtonian equations in physics. Accordingly, each solution of the general equilibrium equations can be regarded as a possible microstate of the economic system. Since Arrow's Impossibility Theorem and Rawls' principle of social fairness provide powerful support for the hypothesis of equal probability, the principle of maximum entropy applies in a just and equilibrium economy, so that a particular income distribution will occur spontaneously (with the largest probability). Remarkably, some scholars have observed such an income distribution in some democratic countries, e.g. the USA. This result implies that the hypothesis of equal probability may be suitable only for "fair" systems (economic or physical). In this sense, non-equilibrium systems may be "unfair", so that the hypothesis of equal probability does not apply.
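A brief sketch of the maximum-entropy argument the abstract alludes to, under the assumptions of nonnegative income and a fixed mean income: entropy maximization then yields an exponential (Boltzmann-Gibbs-like) distribution, the form reported for the bulk of incomes in some countries. The constraints chosen here are the standard textbook ones, not necessarily the paper's exact formulation.

```latex
% Maximum-entropy sketch (assumptions: nonnegative income x, fixed mean income <x>)
\max_{p}\; S[p] = -\int_0^\infty p(x)\,\ln p(x)\,dx
\quad\text{s.t.}\quad
\int_0^\infty p(x)\,dx = 1,\qquad
\int_0^\infty x\,p(x)\,dx = \langle x \rangle
\;\;\Longrightarrow\;\;
p(x) = \frac{1}{T}\, e^{-x/T},\qquad T = \langle x \rangle .
```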
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution, which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
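For readers unfamiliar with PCE itself, here is a minimal one-dimensional sketch with probabilists' Hermite polynomials: the coefficients of Y = g(X), X ~ N(0,1), are obtained by projection with Gauss-Hermite quadrature and the expansion is then used as a sampling surrogate. The map g and truncation order are illustrative; the paper's construction for data concentrated on a manifold (diffusion-maps basis plus generator) is not reproduced here.

```python
# Sketch: a minimal 1-D polynomial chaos expansion with probabilists' Hermite
# polynomials He_k, for Y = g(X) with X ~ N(0,1).  Coefficients are
# c_k = E[g(X) He_k(X)] / k!, computed by Gauss-Hermite_e quadrature.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

g = lambda x: np.exp(0.3 * x) + 0.1 * x**2          # illustrative nonlinear map
order, nquad = 8, 40

# Gauss-Hermite_e quadrature: weight exp(-x^2/2), total weight sqrt(2*pi)
xq, wq = He.hermegauss(nquad)

coeffs = []
for k in range(order + 1):
    Hek = He.hermeval(xq, [0] * k + [1])
    coeffs.append(np.sum(wq * g(xq) * Hek) / (sqrt(2 * pi) * factorial(k)))

# the PCE surrogate: Y_hat = sum_k c_k He_k(X), sampled by drawing X ~ N(0,1)
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y_hat = He.hermeval(x, coeffs)
print("surrogate mean/std:", y_hat.mean(), y_hat.std())
print("true      mean/std:", g(x).mean(), g(x).std())
```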
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at Δt of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median Δt of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
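For context, the time delay between the two images of a point-mass (Schwarzschild) lens follows the standard textbook formula used in such calculations; the sketch below evaluates it for an assumed lens mass, lens redshift, and a few source offsets, chosen only for illustration and not taken from the paper's population model.

```python
# Sketch: time delay between the two images of a point-mass (Schwarzschild)
# lens, using the standard formula
#   Delta_t = (1 + z_d) * (4GM/c^3) * [ y*sqrt(y^2+4)/2
#             + ln((sqrt(y^2+4)+y)/(sqrt(y^2+4)-y)) ],
# where y is the source offset in Einstein radii.  Mass, redshift and y values
# below are illustrative assumptions.
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def point_lens_delay(M, y, z_d):
    s = np.sqrt(y**2 + 4)
    return (1 + z_d) * (4 * G * M / c**3) * (y * s / 2 + np.log((s + y) / (s - y)))

M, z_d = 1e6 * M_sun, 1.0
for y in (0.1, 0.5, 1.0, 2.0):
    print(f"y = {y:3.1f}:  Delta_t = {point_lens_delay(M, y, z_d):7.1f} s")
```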
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
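The paper's analytic formula and its MATLAB code are not reproduced here; as a hedged companion, the sketch below estimates the same quantity by Monte Carlo for a two-interval forced-choice task, where the maximum-likelihood observer picks the interval whose observation has the larger likelihood ratio. The Gaussian and uniform example distributions are illustrative assumptions; for equal-variance Gaussians with d' = 1 the estimate should approach the known value Φ(d'/√2) ≈ 0.760.

```python
# Sketch: Monte Carlo estimate of the maximum proportion correct (Pc) for an
# ideal (maximum-likelihood) observer in a 2-interval forced-choice task.
# The observer chooses the interval with the larger log likelihood ratio
# log f_signal(x) - log f_noise(x); ties are resolved by guessing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 200_000

def max_pc(signal, noise):
    xs = signal.rvs(size=n_trials, random_state=rng)     # observation in the signal interval
    xn = noise.rvs(size=n_trials, random_state=rng)       # observation in the noise interval
    llr = lambda x: signal.logpdf(x) - noise.logpdf(x)    # log likelihood ratio
    correct = llr(xs) > llr(xn)
    ties = llr(xs) == llr(xn)
    return correct.mean() + 0.5 * ties.mean()

# Gaussian internal noise, d' = 1: analytic maximum Pc is Phi(1/sqrt(2)) ~ 0.760
print("Gaussian :", max_pc(stats.norm(1, 1), stats.norm(0, 1)))
# Uniform (non-Gaussian) noise with partial overlap of the two distributions
print("Uniform  :", max_pc(stats.uniform(0.5, 1), stats.uniform(0, 1)))
```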